id: 6724 | revid: 19951496 | url: https://en.wikipedia.org/wiki?curid=6724
Copacabana, Rio de Janeiro
Copacabana is a Brazilian "bairro" (neighbourhood) located in the South Zone of the city of Rio de Janeiro, Brazil. It is most prominently known for its 4 km (2.5 miles) balneario beach, which is one of the most famous in the world. History. Until the mid-18th century the district was known by a Tupi name meaning "the way of the socós", the socó being a kind of heron. It was renamed after the construction of a chapel holding a replica of the Virgen de Copacabana, the patron saint of Bolivia. Characteristics. Copacabana begins at Princesa Isabel Avenue and ends at Posto Seis (lifeguard watchtower Six). Beyond Copacabana there are two small beaches: one inside Fort Copacabana and the other, Diabo ("Devil") Beach, right after it. Arpoador beach, sought out by surfers for its perfect waves, comes next, followed by the famous borough of Ipanema. The area served as one of the four "Olympic Zones" during the 2016 Summer Olympics. According to Riotur, the Tourism Secretariat of Rio de Janeiro, there are 63 hotels and 10 hostels in Copacabana. Copacabana Beach. Copacabana beach, located on the Atlantic shore, stretches from Posto Dois (lifeguard watchtower Two) to Posto Seis (lifeguard watchtower Six). Leme is at Posto Um (lifeguard watchtower One). There are historic forts at both ends of Copacabana beach: Fort Copacabana, built in 1914, is at the south end by Posto Seis, and Fort Duque de Caxias, built in 1779, is at the north end. Many hotels, restaurants, bars, nightclubs and residential buildings are located in the area. On Sundays and holidays, one side of Avenida Atlântica is closed to cars, giving residents and tourists more space for activities along the beach. Copacabana Beach plays host to millions of revellers during the annual New Year's Eve celebrations and, for the first three editions of the tournament, was the official venue of the FIFA Beach Soccer World Cup. Copacabana promenade. The Copacabana promenade is a large-scale pavement landscape, 4 kilometres long. It was rebuilt in 1970 and has used a black and white Portuguese pavement design since its origin in the 1930s: a geometric wave. The Copacabana promenade was designed by Roberto Burle Marx. Living standard. Copacabana has the 12th highest Human Development Index in Rio; the 2000 census put the HDI of Copacabana at 0.902. Neighbourhood. According to the IBGE, 160,000 people live in Copacabana, and 44,000 of them (27.5%) are 60 years old or older. Copacabana covers an area of 5.220 km², which gives the borough a population density of 20,400 people per km². Residential buildings eleven to thirteen stories high, built next to each other, dominate the borough. Houses and two-story buildings are rare. When Rio was the capital of Brazil, Copacabana was considered one of the best neighborhoods in the country. Transportation. More than 40 different bus routes serve Copacabana, as do three Metro (subway) stations: Cantagalo, Siqueira Campos and Cardeal Arcoverde. Three major arteries parallel to each other cut across the entire borough: Avenida Atlântica (Atlantic Avenue), a 6-lane, 4 km avenue along the beachside; Nossa Senhora de Copacabana Avenue; and Barata Ribeiro/Raul Pompéia Street, the latter two being 4 lanes wide and 3.5 km in length. Barata Ribeiro Street changes its name to Raul Pompéia Street after the Sá Freire Alvim Tunnel. Twenty-four streets intersect all three major arteries, and seven other streets intersect some of the three. Notable events.
Eleven of the 15 FIFA Beach Soccer World Cups have taken place here. New Year's Eve in Copacabana. The fireworks display in Rio de Janeiro to celebrate New Year's Eve is one of the largest in the world, lasting 15 to 20 minutes. It is estimated that 2 million people go to Copacabana Beach to see the spectacle. The festival also includes a concert that extends throughout the night. The celebration has become one of the biggest tourist attractions of Rio de Janeiro, attracting visitors from all over Brazil as well as from different parts of the world, and the city's hotels generally stay fully booked. The celebration is broadcast live on major Brazilian radio and television networks, including TV Globo. History. New Year's Eve has been celebrated on Copacabana beach since the 1950s, when followers of religions of African origin such as Candomblé and Umbanda gathered in small groups dressed in white for ritual celebrations. The first fireworks display occurred in 1976, sponsored by a hotel on the waterfront, and it has been repeated ever since. In the 1990s the city saw the event as a great opportunity to promote Rio, and organized and expanded it. An assessment made during the 1992 New Year's Eve celebration highlighted the risks associated with increasing crowd numbers on Copacabana beach after the fireworks display. Since the 1993–94 event, concerts have been held on the beach to retain the public. The result was a success, with the crowd dispersing over a period of about two hours without the previous turmoil, although critics claimed that it went against the spirit of the New Year's tradition of a religious festival with fireworks by the sea. The following year Rod Stewart beat attendance records. Finally, the Tribute to Tom Jobim - with Gal Costa, Gilberto Gil, Caetano Veloso, Chico Buarque, and Paulinho da Viola - consolidated the shows at the Copacabana Réveillon. There was then a need to transform the fireworks display into a show of the same quality. The fireworks display was created by entrepreneurs Ricardo Amaral and Marius. The show was extended from the previous 8–10 minutes to 20 minutes, and the quality and diversity of the fireworks were improved. A technical problem with the fireworks in 2000 led to the use of ferries from the 2001–02 New Year's Eve onwards. New Year's Eve has begun to compete with the Carnival, and since 1992 it has been a tourist attraction in its own right. The public celebration was cancelled in 2020–21 due to the COVID-19 pandemic, although the fireworks show still went ahead.
id: 6725 | revid: 48128642 | url: https://en.wikipedia.org/wiki?curid=6725
Cy Young Award
The Cy Young Award is given annually to the best pitchers in Major League Baseball (MLB), one each for the American League (AL) and National League (NL). The award was introduced in 1956 by Baseball Commissioner Ford C. Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. The award was originally given to the single best pitcher in the major leagues, but in 1967, after the retirement of Frick, the award was given to one pitcher in each league. Each league's award is voted on by members of the Baseball Writers' Association of America (BBWAA). Local BBWAA chapter chairmen in each MLB city recommend two writers to vote for each award. Final approval comes from the BBWAA national secretary-treasurer. Writers vote for either the American League or National League awards, depending on the league in which their local team plays. A total of 30 writers vote for each league's awards. Writers cast their votes prior to the start of postseason play. As of the 2010 season, each voter places a vote for first, second, third, fourth, and fifth place among the pitchers of each league. The formula used to calculate the final scores is a weighted sum of the votes. The pitcher with the highest score in each league wins the award. If two pitchers receive the same number of votes, the award is shared. From 1970 to 2009, writers voted for three pitchers, with the formula of five points for a first-place vote, three for a second-place vote and one for a third-place vote. Before 1970, writers only voted for the best pitcher and used a formula of one point per vote. History. The Cy Young Award was introduced in 1956 by Commissioner of Baseball Ford C. Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. Originally given to the single best pitcher in the major leagues, the award changed its format over time. From 1956 to 1966, the award was given to one pitcher in Major League Baseball. After Frick retired in 1967, William Eckert became the new Commissioner of Baseball. Due to fan requests, Eckert announced that the Cy Young Award would be given out both in the American League and the National League. From 1956 to 1958, a pitcher was not allowed to win the award on more than one occasion; this rule was eliminated in 1959. After a tie in the 1969 voting for the Cy Young Award, the process was changed, in which each writer was to vote for three pitchers: the first-place vote received five points, the second-place vote received three points, and the third-place vote received one point. The first recipient of the Cy Young Award was Don Newcombe of the Dodgers. The Dodgers are the franchise with the most Cy Young Awards. In 1957, Warren Spahn became the first left-handed pitcher to win the award. In 1963, Sandy Koufax became the first pitcher to win the award in a unanimous vote; two years later he became the first multiple winner. In 1978, Gaylord Perry (age 40) became the oldest pitcher to receive the award, a record that stood until broken in 2004 by Roger Clemens (age 42). The youngest recipient was Dwight Gooden (age 20 in 1985). In 2012, R. A. Dickey became the first knuckleball pitcher to win the award. In 1974, Mike Marshall became the first relief pitcher to win the award. In 1992, Dennis Eckersley was the first modern closer (first player to be used almost exclusively in ninth-inning situations) to win the award. Since then only one other relief pitcher has won the award, Éric Gagné in 2003 (also a closer). Nine relief pitchers have won the Cy Young Award across both leagues. 
Steve Carlton in 1982 became the first pitcher to win more than three Cy Young Awards, while Greg Maddux in 1994 became the first to win at least three in a row (he received a fourth straight the following year), a feat later repeated by Randy Johnson. Winners. Multiple winners. Twenty-two pitchers have won the award multiple times. Roger Clemens has won the most awards, with seven; his first and last wins were 18 years apart. Greg Maddux (1992–1995) and Randy Johnson (1999–2002) share the record for the most consecutive awards won, with four. Clemens, Johnson, Pedro Martínez, Gaylord Perry, Roy Halladay, Max Scherzer, and Blake Snell are the only pitchers to have won the award in both the American League and the National League. Sandy Koufax is the only pitcher to have won multiple awards during the period when only one award was presented for all of MLB. Roger Clemens was the youngest pitcher to win a second Cy Young Award, while Tim Lincecum is the youngest pitcher to do so in the National League, and Clayton Kershaw is the youngest left-hander to do so. Kershaw is the youngest pitcher to win a third Cy Young Award. Clemens is also the only pitcher to win the award with four different teams; nobody else has done so with more than two different teams. Justin Verlander has the most seasons separating his first (2011) and second (2019) Cy Young Awards. Wins by teams. Only two teams have never had a pitcher win the Cy Young Award. The Brooklyn/Los Angeles Dodgers have won more than any other team, with 12. Unanimous winners. There have been 21 players who unanimously won the Cy Young Award, for a total of 28 wins. Six of these unanimous wins were accompanied by a win of the Most Valuable Player award. In the National League, 12 players have unanimously won the Cy Young Award, for a total of 15 wins. In the American League, nine players have unanimously won the Cy Young Award, for a total of 13 wins.
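The weighted-sum ballot scoring described in this entry can be illustrated with a short sketch. This is a minimal, hypothetical example rather than an official MLB or BBWAA implementation: the ballots and pitcher names are invented, and it uses the 1970–2009 weights stated above (five points for a first-place vote, three for second, one for third).

```python
# Minimal sketch of the Cy Young weighted-vote tally.
# Ballots and pitcher names are invented; the 5/3/1 weights are the
# 1970-2009 values described in the article text above.
from collections import defaultdict

WEIGHTS = {1: 5, 2: 3, 3: 1}  # points awarded per ballot position

# Each ballot lists one writer's first, second, and third choices, in order.
ballots = [
    ["Pitcher A", "Pitcher B", "Pitcher C"],
    ["Pitcher A", "Pitcher C", "Pitcher B"],
    ["Pitcher B", "Pitcher A", "Pitcher C"],
]

def tally(ballots):
    """Return total points per pitcher; the highest score wins,
    and a tie in points means the award is shared."""
    scores = defaultdict(int)
    for ballot in ballots:
        for place, pitcher in enumerate(ballot, start=1):
            scores[pitcher] += WEIGHTS[place]
    return dict(scores)

if __name__ == "__main__":
    for pitcher, points in sorted(tally(ballots).items(), key=lambda kv: -kv[1]):
        print(f"{pitcher}: {points} points")
```

For the five-place ballots used since 2010, only the WEIGHTS table would need to change; the weighted-sum tally itself stays the same.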
id: 6728 | revid: 1299915734 | url: https://en.wikipedia.org/wiki?curid=6728
Antisemitism in Christianity
Some Christian churches, Christian groups, and ordinary Christians express antisemitism—as well as anti-Judaism—towards Jews and Judaism. These expressions of antisemitism can be considered examples of "antisemitism expressed by Christians" or antisemitism expressed by Christian communities. However, the term "Christian antisemitism" has also been used in reference to anti-Jewish sentiments that arise out of Christian doctrinal or theological stances (by thinkers such as Jules Isaac, for example—especially in his book "Jésus et Israël"). The term is also used to suggest that to some degree, contempt for Jews and Judaism is inherent in Christianity as a religion, and as a result, the centralized institutions of Christian power (such as the Catholic Church or the Church of England), as well as governments with strong Christian influences (such as the Catholic Monarchs of Spain), have generated societal structures that have survived and perpetuate antisemitism to the present. This usage particularly appears in discussions about Christian structures of power within society—structures that are referred to as Christian hegemony or Christian privilege; these discussions are part of larger discussions about structural inequality and power dynamics. Antisemitic Christian rhetoric and the resulting antipathy towards Jews date back to early Christianity, resembling pagan anti-Jewish attitudes that were reinforced by the belief that Jews are responsible for the crucifixion of Jesus. Christians imposed ever-increasing anti-Jewish measures over the ensuing centuries, including acts of ostracism, humiliation, expropriation, violence, and murder—measures which culminated in the Holocaust. Christian antisemitism has been attributed to numerous factors, including the fundamental theological differences that exist between the two Abrahamic religions; the competition between church and synagogue; the Christian missionary impulse; a misunderstanding of Jewish culture, beliefs, and practice; and the perception that Judaism was hostile towards Christianity. For two millennia, these attitudes were reinforced in Christian preaching, art, and popular teachings, as well as in anti-Jewish laws designed to humiliate and stigmatise Jews. Modern antisemitism has primarily been described as hatred against Jews as a race (see racial antisemitism), and the most recent expression of it is rooted in 18th-century scientific racism. Anti-Judaism is rooted in hostility towards the entire religion of Judaism; in Western Christianity, anti-Judaism effectively merged with antisemitism during the 12th century. Scholars have disagreed about the role which Christian antisemitism played in the rise of Nazi Germany, World War II, and the Holocaust. The Holocaust forced many Christians to reflect on the role(s) Christian theology and practice played—and still play—in anti-Judaism and antisemitism. Early differences between Christianity and Judaism. The legal status of Christianity and Judaism differed within the Roman Empire: because the practice of Judaism was restricted to the Jewish people and Jewish proselytes, adherents of it were generally exempt from adhering to the obligations that were imposed on adherents of other religions by the Roman imperial cult. Since the reign of Julius Caesar, Judaism enjoyed the status of a "licit religion", but occasional persecutions still occurred, such as Tiberius' conscription and expulsion of Jews in 19 AD, followed by Claudius' expulsion of Jews from Rome.
Christianity, however, was not restricted to one people, and because Jewish Christians were excluded from the synagogue (see Council of Jamnia), they also lost the protected status that was granted to Judaism, even though that protection still had its limits (see Titus Flavius Clemens (consul), Rabbi Akiva, and Ten Martyrs). From the reign of Nero onwards (Nero is said by Tacitus to have blamed the Great Fire of Rome on Christians), the practice of Christianity was criminalized and Christians were frequently persecuted, but the persecution differed from region to region. Comparably, Judaism suffered setbacks due to the Jewish–Roman wars, and these setbacks are remembered in the legacy of the Ten Martyrs. Robin Lane Fox traces the origin of much of the later hostility to this early period of persecution, when the Roman authorities commonly tested the faith of suspected Christians by forcing them to pay homage to the deified emperor. Jews were exempt from this requirement as long as they paid the Jewish tax, and Christians (many or mostly of Jewish origin) would say that they were Jewish but refused to pay the tax. This claim had to be confirmed by the local Jewish authorities, who were likely to refuse to accept the Christians as fellow Jews, which often led to their execution. This refusal was often brought forward as support for the charge that the Jews were responsible for the Persecution of Christians in the Roman Empire. Systematic persecution of Christians lasted until Constantine's conversion to Christianity. In 380 Theodosius I made Christianity the state church of the Roman Empire. While pagan cults and Manichaeism were suppressed, Judaism retained its legal status as a licit religion, but anti-Jewish violence still occurred. In the 5th century, some legal measures worsened the status of the Jews in the Roman Empire. Issues which Judaism has with the New Testament. Jesus as the Messiah. In Judaism, Jesus is not recognized as the Messiah and is viewed as one of many failed Jewish Messiah claimants and a false prophet, a stance that Christians acknowledge as the Jewish people's rejection of him. In Judaism, the belief is that the arrival of the prophesied Messianic Age is contingent upon the coming of the Messiah. Consequently, the comprehensive rejection of Jesus as either the Messiah or a divine figure has not been a pivotal concern within Jewish theological discourse. Jewish deicide. Jewish deicide is the belief that Jews, to this day and for all time, are collectively responsible for the killing of Jesus, a notion also known as the blood curse. Even before the Gospels were finalized, Paul described the Jews as those "who killed both the Lord Jesus and the prophets" in his First Epistle to the Thessalonians 2:14–15. A justification of the deicide charge also appears in the Gospel of Matthew 27:24–25, alleging that a crowd of Jews told Pilate that they and their children would be responsible for Jesus's death. The Acts of the Apostles, written by the same author as the Gospel of Luke, repeatedly reproaches the Jews for having "crucified and killed" Jesus. The Gospel of John exhibits a hostile tone towards 'the Jews', particularly in verses like John 5:16, 6:52, 7:13, 8:44, 10:31, and others, which also implicate them in Jesus' death. Most members of the Church of Jesus Christ of Latter-day Saints accept the notion of Jewish deicide, while the Catholic Church repudiated it in 1965, as have several other Christian denominations. Criticism of the Pharisees.
Many New Testament passages criticise the Pharisees, a Jewish social movement and school of thought that flourished during the Second Temple period (516 BCE–70 CE). It has been argued that these passages shaped the way in which Christians viewed and continue to view Jews. Like most Bible passages, however, they can be interpreted in a variety of ways. Today, mainstream Rabbinical Judaism is directly descended from the Pharisaical tradition, which Jesus frequently criticized. During Jesus's life and at the time of his execution, the Pharisees were only one of several Jewish groups; the others, such as the Sadducees, Zealots, and Essenes, mostly died out not long after the period. Jewish scholars such as Harvey Falk and Hyam Maccoby have suggested that Jesus was himself a Pharisee. In the Sermon on the Mount, for example, Jesus says, "The Pharisees sit in Moses' seat, therefore do what they say". Arguments by Jesus and his disciples against certain groups of Pharisees and what he saw as their hypocrisy were most likely examples of disputes among Jews and internal to Judaism that were common at the time (see, for example, Hillel and Shammai). Recent studies of antisemitism in the New Testament. Professor Lillian C. Freudmann, author of "Antisemitism in the New Testament" (University Press of America, 1994), has published a detailed study of the description of Jews in the New Testament and the historical effects that such passages have had in the Christian community throughout history. Similar studies of such verses have been made by both Christian and Jewish scholars, including Professors Clark Williamson (Christian Theological Seminary), Hyam Maccoby (The Leo Baeck Institute), Norman A. Beck (Texas Lutheran College), and Michael Berenbaum (Georgetown University). Most rabbis feel that these verses are anti-Semitic, and many Christian scholars in America and Europe have reached the same conclusion. Another example is John Dominic Crossan's 1995 book, titled "Who Killed Jesus? Exposing the Roots of Anti-Semitism in the Gospel Story of the Death of Jesus". Crossan writes: "The passion-resurrection stories... have been the seedbed of Christian anti-Judaism. And without that Christian anti-Judaism, lethal and genocidal European anti-Semitism would have been impossible or at least not widely successful. What was at stake in those passion stories in the long-haul of history, was the Holocaust." Some biblical scholars have also been accused of holding anti-Semitic beliefs. Bruce J. Malina, a founding member of The Context Group, has come under criticism for going as far as to deny the Semitic ancestry of modern Israelis, a claim he ties back to his work on first-century cultural anthropology. Church Fathers. After Paul's death, Christianity emerged as a separate religion, and Pauline Christianity became the dominant form of Christianity, especially after Paul, James and the other apostles agreed on a compromise set of requirements. Some Christians continued to adhere to aspects of Jewish law, but they were few and often considered heretics by the Church. One example is the Ebionites, who seem to have denied the virgin birth of Jesus, the physical Resurrection of Jesus, and most of the books that were later canonized as the New Testament. The Ethiopian Orthodox, for example, continue Old Testament practices such as the Sabbath. As late as the 4th century, the Church Father John Chrysostom complained that some Christians were still attending Jewish synagogues.
The Church Fathers identified Jews and Judaism with heresy and declared the people of Israel to be "outside of God". Peter of Antioch. Peter of Antioch referred to Christians who refused to venerate religious images as having "Jewish minds". Marcion of Sinope. In the early second century AD, the heretic Marcion of Sinope declared that the Jewish God was a different God, inferior to the Christian one, and rejected the Jewish scriptures as the product of a lesser deity. Marcion's teachings, which were extremely popular, rejected Judaism not only as an incomplete revelation, but as a false one as well, but, at the same time, allowed less blame to be placed on the Jews personally for having not recognized Jesus, since, in Marcion's worldview, Jesus was not sent by the lesser Jewish God, but by the supreme Christian God, whom the Jews had no reason to recognize. In combating Marcion, orthodox apologists conceded that Judaism was an incomplete and inferior religion to Christianity, while also defending the Jewish scriptures as canonical. Tertullian. The Church Father Tertullian had a particularly intense personal dislike towards the Jews and argued that the Gentiles had been chosen by God to replace the Jews because they were worthier and more honorable. Origen of Alexandria was more knowledgeable about Judaism than any of the other Church Fathers, having studied Hebrew, met Rabbi Hillel the Younger, consulted and debated with Jewish scholars, and been influenced by the allegorical interpretations of Philo of Alexandria. Origen defended the canonicity of the Hebrew Bible and defended Jews of the past as having been chosen by God for their merits. Nonetheless, he condemned contemporary Jews for not understanding their own Law, insisted that Christians were the "true Israel", and blamed the Jews for the death of Christ. He did, however, maintain that Jews would eventually attain salvation in the final "apocatastasis". Hippolytus of Rome wrote that the Jews had "been darkened in the eyes of your soul with a darkness utter and everlasting." Augustine of Hippo. Bishops of the patristic era such as Augustine of Hippo argued that the Jews should be left alive and suffering as a perpetual reminder of their murder of Christ. Like his anti-Jewish teacher, Ambrose of Milan, he defined Jews as a special subset of those damned to hell. As "Witness People", he sanctified collective punishment for the Jewish deicide and enslavement of Jews to Catholics: "Not by bodily death shall the ungodly race of carnal Jews perish [...] 'Scatter them abroad, take away their strength. And bring them down, O Lord.'" Augustine claimed to "love" the Jews but as a means to convert them to Christianity. Sometimes he identified all Jews with the evil of Judas Iscariot and developed the doctrine (together with Cyprian) that there was "no salvation outside the Church". John Chrysostom. John Chrysostom and other church fathers went further in their condemnation; the Catholic editor Paul Harkins wrote that St. John Chrysostom's anti-Jewish theology "is no longer tenable [...] For these objectively unchristian acts, he cannot be excused, even if he is the product of his times." John Chrysostom held, as most Church Fathers did, that the sins of all Jews were communal and endless; to Chrysostom, his Jewish neighbors were the collective representation of all alleged crimes of all preexisting Jews.
All Church Fathers applied the New Testament passages concerning the Jews' alleged advocacy of the crucifixion of Christ to all Jews of their day, holding that the Jews were the ultimate evil. However, Chrysostom went so far as to say that because Jews rejected the Christian God in human flesh, Christ, they therefore deserved to be killed: they "grew fit for slaughter." In citing the New Testament, he claimed that Jesus was speaking about Jews when he said, "as for these enemies of mine who did not want me to reign over them, bring them here and slay them before me." Jerome. St. Jerome identified Jews with Judas Iscariot and the immoral use of money ("Judas is cursed, that in Judas the Jews may be accursed [...] their prayers turn into sins"). Jerome's homiletical assaults, which may have served as the basis for the anti-Jewish Good Friday liturgy, associate Jews with evil, asserting that "the ceremonies of the Jews are harmful and deadly to Christians" and that whoever keeps them is doomed to the devil: "My enemies are the Jews; they have conspired in hatred against Me, crucified Me, heaped evils of all kinds upon Me, blasphemed Me." Ephraim the Syrian. Ephraim the Syrian wrote polemics against Jews in the 4th century, including the repeated accusation that Satan dwells among them as a partner. The writings were directed at Christians who were being proselytized by Jews. Ephraim feared that they were slipping back into Judaism; thus, he portrayed the Jews as enemies of Christianity, like Satan, to emphasize the contrast between the two religions, namely, that Christianity was Godly and true and Judaism was Satanic and false. Like Chrysostom, his objective was to dissuade Christians from reverting to Judaism by emphasizing what he saw as the wickedness of the Jews and their religion. Middle Ages. In 7th-century Spain, Visigoth Christian rulers and the Spanish Church's Councils of Toledo implemented policies of forced conversions and expulsions of Jews. Later, in the 12th century, Bernard of Clairvaux said "For us the Jews are Scripture's living words, because they remind us of what Our Lord suffered. They are not to be persecuted, killed, or even put to flight." According to Anna Sapir Abulafia, most scholars agree that Jews and Christians in Latin Christendom lived in relative peace with one another until the 13th century. Massacres. Starting in the 11th century, the Crusades unleashed a wave of antisemitism, with attacks, massacres and forced conversions of Jews, which continued to occur throughout the Middle Ages. While Muslims of the Holy Land were the primary targets, the Crusades soon expanded to other perceived enemies of Christianity inside Europe - pagans (Northern Crusades) and heretics (Albigensian Crusade). Jews became targets of the Crusaders, as they were viewed as "enemies of God" responsible for Christ's crucifixion. The knights of the First Crusade perpetrated the Rhineland massacres of Jews in 1096, while the Second Crusade led to massacres in France. The gathering for the Third Crusade in 1189-1190 brought about massacres of Jews in London, Northampton and York. Further massacres followed in Franconia (1298) and in France in 1320 as part of the Shepherds' Crusade. The 1391 massacres of Jews in Spain proved to be especially deadly, forcing many to convert. A prime mover of the violence in Spain was Archdeacon Ferrand Martinez, who called for the persecution of the Jews in his homilies and speeches, claiming that he was obeying God's commandment.
In Austria in 1420 all Jews were arrested and jailed, with 200 burned alive on the pyre. Expulsions. Beyond massacres, Jews were repeatedly expelled from Europe. In 1290, King Edward I expelled all Jews from England; they were not permitted to return until 1656. Similar expulsions followed in France in 1306, Switzerland in 1348 and Germany in 1394. In 1492 the Catholic King and Queen of Spain gave Jews the choice of either baptism or expulsion; as a result, more than 160,000 Jews were expelled. Jews were only officially allowed back into Spain in 1868, with the establishment of a constitutional monarchy that allowed for the practice of faiths other than Catholicism; however, the ability to practice Judaism was not fully restored until 1968, when the edict of expulsion was formally repealed. The most common reasons given for these banishments were the need for religious purity, protection of Christian citizens from Jewish money lending, or pressure from other citizens who hoped to profit from the Jews' absence. Other discrimination. Jews were subjected to a wide range of legal disabilities and restrictions in medieval Europe. Jews were excluded from many trades, the occupations varying with place and time, and determined by the influence of various non-Jewish competing interests. Often Jews were barred from all occupations but money-lending and peddling, with even these at times forbidden. The Jews' association with money lending would carry on throughout history in the stereotype of Jews being greedy and perpetuating capitalism. Another stereotype that appeared in the 12th century was the blood libel, which alleged that the Jews killed Christian boys and used their blood to make unleavened bread. Such accusations led to persecutions and killings of Jews. In the later medieval period, the number of Jews who were permitted to reside in certain places was limited; they were concentrated in ghettos, and they were also not allowed to own land; they were forced to pay discriminatory taxes whenever they entered cities or districts other than their own. The Oath More Judaico, the form of oath required from Jewish witnesses, developed bizarre or humiliating forms in some places; under the Swabian law of the 13th century, for example, the Jew would be required to stand on the hide of a sow or a bloody lamb. "Sicut Judaeis" (the "Constitution for the Jews") was the official position of the papacy regarding Jews throughout the Middle Ages and later. The first papal bull was issued in about 1120 by Calixtus II, intended to protect Jews who suffered during the First Crusade, and it was reaffirmed by many popes up until the 15th century, although its provisions were not always strictly upheld. The bull forbade, among other things, Christians from coercing Jews to convert, harming them, taking their property, disturbing the celebration of their festivals, or interfering with their cemeteries, on pain of excommunication. Papal restrictions and persecution of Jews. While some popes offered protection to Jews, others implemented restrictive policies and actions that contributed to their marginalization and persecution. A key role was played by Pope Innocent III, who justified his calls for lay and Church authorities to restrict Jewish "insolence" by claiming God made Jews slaves for rejecting and killing Christ. He proclaimed them to be the enemies of Christ, who must be kept in a position of social inferiority and prevented from exercising power over Christians.
Devaluing testimony of Jews: The Third Lateran Council, convened by Pope Alexander III in 1179, declared that the testimony of Christians should always be accepted over the testimony of Jews, that those who believe the testimony of Jews should be anathematized, and that Jews should be subject to Christians. It forbade Christians from serving Jews and Muslims in their homes, calling for the excommunication of those who did. Prohibitions on holding public office: The Fourth Lateran Council, of 1215, convened by Pope Innocent III, declared: "Since it is absurd that a blasphemer of Christ exercise authority over Christians, we ... renew in this general council what the Synod of Toledo (589) wisely enacted in this matter, prohibiting Jews from being given preference in the matter of public offices, since in such capacity they are most troublesome to the Christians." These prohibitions remained in effect for centuries. Distinctive clothing and badges: The Fourth Lateran Council required Jews to wear distinctive clothing or badges to distinguish them from Christians. The reason given for this was to enforce prohibitions against sexual intercourse between Christians and Jews and Muslims. This practice of requiring Jews to wear distinctive clothing and badges was reinforced by subsequent popes and became widespread across Europe. Such markings led to threats, extortion and violence against Jews. This requirement was only removed with the Jewish Emancipation following the Enlightenment, but the Nazis revived it. The council also forbade Jews and Muslims from appearing in public during the last three days of Easter. Condemnations and burning of the Talmud: In 1239, Pope Gregory IX sent a letter to the clergy in France containing accusations against the Talmud made by a Franciscan. He ordered the confiscation of Jewish books while Jews were gathered in synagogue, and that all such books be "burned at the stake." Similar instructions were conveyed to the kings of France, England, Spain, and Portugal. Twenty-four wagons of Jewish books were burned in Paris. Additional condemnations of the Talmud were issued by Popes Innocent IV in his bull of 1244, Alexander IV, John XXII in 1320, and Alexander V in 1409. Pope Eugenius IV issued a bull prohibiting Jews from studying the Talmud following the Council of Basle, 1431–43. Spanish Inquisition: In 1478 Pope Sixtus IV issued a bull which authorized the Spanish Inquisition. This institutionalized the persecution of Jews who had converted to Christianity ("conversos"), many of whom had converted as a result of mass violence against Jews by Catholics (e.g. the massacres of 1391). The Inquisition employed torture and property confiscation, and thousands were burned at the stake. In 1492 Jews were given the choice of either baptism or expulsion; as a result, more than 160,000 Jews were expelled. Portuguese Inquisition: In 1536 Pope Paul III established the Portuguese Inquisition with a papal bull. The major targets of the Portuguese Inquisition were Jewish converts to Catholicism, who were suspected of secretly practicing Judaism. Many of these were originally Spanish Jews who had left Spain for Portugal when Spain forced Jews to convert to Christianity or leave. The number of its victims (between 1540 and 1765) is estimated at 40,000. Ghettos: In 1555, Pope Paul IV issued the papal bull "Cum nimis absurdum", which forced Jews in the Papal States to live in ghettos. It declared it "absurd" that Jews, condemned by God to slavery for their faults, had "invaded" the Papal States and were living freely among Christians.
It justified these restrictions by asserting that Jews were "slaves" for their deeds, while Christians were "freed" by Jesus, and that Jews should see "the true light" and convert to Catholicism. This policy was later adopted in other parts of Europe. The Roman Ghetto, established in 1555, was one of the best-known Jewish ghettos, existing until the Papal States were abolished in 1870, after which Jews were no longer confined to it. Forced conversions and expulsions: Some popes supported or initiated forced conversions and expulsions of Jews. For example, Pope Pius V expelled Jews from the Papal States in 1569, with the exception of Rome and Ancona. In 1593 Pope Clement VIII expelled the Jews from the Papal States with the bull "Caeca et Obdurata Hebraeorum perfidia" (meaning "The blind and obdurate perfidy of the Hebrews"). Pope Innocent III in 1201 authorized the forced baptism of Jews in southern France, declaring that those who had been forcibly baptized must remain Christian. Restrictions on Jewish economic activities: Various popes imposed restrictions on Jewish economic activities, limiting their professions and ability to own property. In 1555 Pope Paul IV, in his bull "Cum nimis absurdum", prohibited Jews from engaging in most professions, restricting them primarily to moneylending and selling second-hand goods. This bull also forbade Jews from owning real estate and limited them to one synagogue per city. Earlier, the Fourth Lateran Council had sought "to protect the Christians against cruel oppression by the Jews", who were said to extort Christians with "oppressive and immoderate" interest rates. Anti-Semitism. Anti-Semitism in popular European Christian culture escalated beginning in the 13th century. Blood libels and host desecration drew popular attention and led to many cases of persecution against Jews. Many believed Jews poisoned wells to cause plagues. In the case of blood libel, it was widely believed that the Jews would kill a child before Easter and needed Christian blood to bake matzo. Throughout history, if a Christian child was murdered, accusations of blood libel would arise no matter how small the Jewish population. The Church often added to the fire by portraying the dead child as a martyr who had been tortured and who was believed to have powers like those of Jesus. Sometimes the children were even made into saints. Anti-Semitic imagery such as the Judensau and Ecclesia et Synagoga recurred in Christian art and architecture. Anti-Jewish Easter holiday customs such as the Burning of Judas continue to the present time. In Iceland, one of the hymns repeated in the days leading up to Easter includes anti-Jewish lines. Persecutions and expulsions. During the Middle Ages in Europe, persecutions and formal expulsions of Jews were liable to occur at intervals, as was also the case for other minority communities, whether religious or ethnic. There were particular outbursts of riotous persecution during the Rhineland massacres of 1096 in Germany; these massacres coincided with the lead-up to the First Crusade, and many of the killings were committed by the crusaders as they traveled to the East. There were many local expulsions from cities by local rulers and city councils. In Germany, the Holy Roman Emperor generally tried to restrain the persecution, if only for economic reasons, but he was frequently unable to exert much influence.
With the Edict of Expulsion, King Edward I expelled all of the Jews from England in 1290 (after he collected ransom from 3,000 of the wealthiest Jews), based on the accusation that they were practicing usury and undermining loyalty to the dynasty. In 1306 there was a wave of persecution in France, and during the Black Death Jews were widely persecuted because many Christians accused them of either causing or spreading the plague. As late as 1519, the Imperial city of Regensburg took advantage of the recent death of Emperor Maximilian I to expel its 500 Jews. "Officially, the medieval Catholic church never advocated the expulsion of all of the Jews from Christendom nor did it repudiate Augustine's doctrine of Jewish witness... Still, late medieval Christendom frequently ignored its mandates". Expulsion of Jews from Spain. The largest expulsion of Jews followed the Reconquista, or the reunification of Spain, and it preceded the expulsion of the Muslims who would not convert, despite the protection of their religious rights promised by the Treaty of Granada (1491). On 31 March 1492 Ferdinand II of Aragon and Isabella I of Castile, the rulers of Spain who financed Christopher Columbus' voyage to the New World just a few months later in 1492, declared that all Jews in their territories should either convert to Christianity or leave the country. While some converted, many others left for Portugal, France, Italy (including the Papal States), the Netherlands, Poland, the Ottoman Empire, and North Africa. Many of those who had fled to Portugal were later expelled by King Manuel in 1497 or left to avoid forced conversion and persecution. From the Renaissance to the 17th century. Cum Nimis Absurdum. On 14 July 1555, Pope Paul IV issued the papal bull Cum nimis absurdum, which revoked all the rights of the Jewish community, placed religious and economic restrictions on Jews in the Papal States, renewed anti-Jewish legislation and subjected Jews to various degradations and restrictions on their freedom. The bull established the Roman Ghetto and required the Jews of Rome, a community which had existed since before Christian times and which numbered about 2,000 at the time, to live in it. The Ghetto was a walled quarter with three gates that were locked at night. Jews were also restricted to one synagogue per city. Paul IV's successor, Pope Pius IV, enforced the creation of other ghettos in most Italian towns, and his successor, Pope Pius V, recommended them to other bordering states. Protestant Reformation. Martin Luther at first made overtures towards the Jews, believing that the "evils" of Catholicism had prevented their conversion to Christianity. When his call to convert to his version of Christianity was unsuccessful, he became hostile to them. In his book "On the Jews and Their Lies", Luther excoriates them as "venomous beasts, vipers, disgusting scum, cankers, devils incarnate." He provided detailed recommendations for a pogrom against them, calling for their permanent oppression and expulsion, writing "Their private houses must be destroyed and devastated, they could be lodged in stables. Let the magistrates burn their synagogues and let whatever escapes be covered with sand and mud. Let them be forced to work, and if this avails nothing, we will be compelled to expel them like dogs in order not to expose ourselves to incurring divine wrath and eternal damnation from the Jews and their lies." At one point he wrote: "...we are at fault in not slaying them...",
a passage that "may be termed the first work of modern anti-Semitism, and a giant step forward on the road to the Holocaust." Luther's harsh comments about the Jews are seen by many as a continuation of medieval Christian anti-Semitism. In his final sermon shortly before his death, however, Luther preached: "We want to treat them with Christian love and to pray for them so that they might become converted and would receive the Lord," but also in the same sermon stated that Jews were "our public enemy" and if they refused conversion were "malicious," guilty of blasphemy and would work to kill gentile believers in Christ. 18th century. In accordance with the anti-Jewish precepts of the Russian Orthodox Church, Russia's discriminatory policies towards Jews intensified when the partition of Poland in the 18th century resulted, for the first time in Russian history, in the possession of land with a large Jewish population. This land was designated as the Pale of Settlement from which Jews were forbidden to migrate into the interior of Russia. In 1772 Catherine II, the empress of Russia, forced the Jews living in the Pale of Settlement to stay in their "shtetls" and forbade them from returning to the towns that they occupied before the partition of Poland. 19th century. Throughout the 19th century and into the 20th, the Roman Catholic Church still incorporated strong anti-Semitic elements, despite increasing attempts to separate anti-Judaism (opposition to the Jewish religion on religious grounds) and racial anti-Semitism. Brown University historian David Kertzer, working from the Vatican archive, has argued in his book "The Popes Against the Jews" that in the 19th and early 20th centuries the Roman Catholic Church adhered to a distinction between "good anti-Semitism" and "bad anti-Semitism". The "bad" kind promoted hatred of Jews because of their descent. This was considered un-Christian because the Christian message was intended for all of humanity regardless of ethnicity; anyone could become a Christian. The "good" kind criticized alleged Jewish conspiracies to control newspapers, banks, and other institutions, to care only about the accumulation of wealth, etc. Many Catholic bishops wrote articles criticizing Jews on such grounds, and, when they were accused of promoting hatred of Jews, they would remind people that they condemned the "bad" kind of anti-Semitism. Kertzer's work is not without critics. Jewish-Christian relations scholar Rabbi David G. Dalin, for example, criticized Kertzer in the "Weekly Standard" for using evidence selectively. Opposition to the French Revolution. The counter-revolutionary Catholic royalist Louis de Bonald stands out among the earliest figures to explicitly call for the reversal of Jewish emancipation in the wake of the French Revolution. Bonald's attacks on the Jews are likely to have influenced Napoleon's decision to limit the civil rights of Alsatian Jews. Bonald's article (1806) was one of the most venomous screeds of its era and furnished a paradigm which combined anti-liberalism, a defense of a rural society, traditional Christian anti-Semitism, and the identification of Jews with bankers and finance capital, which would in turn influence many subsequent right-wing reactionaries such as Roger Gougenot des Mousseaux, Charles Maurras, and Édouard Drumont, nationalists such as Maurice Barrès and Paolo Orano, and anti-Semitic socialists such as Alphonse Toussenel. 
Bonald furthermore declared that the Jews were an "alien" people, a "state within a state", and should be forced to wear a distinctive mark to more easily identify and discriminate against them. In the 1840s, the popular counter-revolutionary Catholic journalist Louis Veuillot propagated Bonald's arguments against the Jewish "financial aristocracy" along with vicious attacks against the Talmud and the Jews as a "deicidal people" driven by hatred to "enslave" Christians. Gougenot des Mousseaux's 1869 work has been called a "Bible of modern anti-Semitism" and was translated into German by the Nazi ideologue Alfred Rosenberg. Between 1882 and 1886 alone, French priests published twenty anti-Semitic books blaming France's ills on the Jews and urging the government to consign them back to the ghettos, expel them, or hang them from the gallows. In Italy, the Jesuit priest Antonio Bresciani's highly popular 1850 novel "L'Ebreo di Verona" ("The Jew of Verona") shaped religious anti-Semitism for decades, as did his work for "La Civiltà Cattolica", which he helped launch. Pope Pius VII (1800–1823) had the walls of the Jewish ghetto in Rome rebuilt after the Jews were emancipated by Napoleon, and Jews were restricted to the ghetto through the end of the Papal States in 1870. Official Catholic organizations, such as the Jesuits, banned candidates "who are descended from the Jewish race unless it is clear that their father, grandfather, and great-grandfather have belonged to the Catholic Church" until 1946. 20th century. In Russia, under the Tsarist regime, anti-Semitism intensified in the early years of the 20th century and was given official favor when the secret police forged the "Protocols of the Elders of Zion", a fabricated document purported to be a transcription of a plan by Jewish elders to achieve global domination. Violence against the Jews in the Kishinev pogrom in 1903 was continued after the 1905 revolution by the activities of the Black Hundreds. The Beilis Trial of 1913 showed that it was possible to revive the blood libel accusation in Russia. Catholic writers such as Ernest Jouin, who published the "Protocols" in French, seamlessly blended racial and religious anti-Semitism, as in his statement that "from the triple viewpoint of race, of nationality, and of religion, the Jew has become the enemy of humanity." Pope Pius XI praised Jouin for "combating our mortal [Jewish] enemy" and appointed him to high papal office as a protonotary apostolic. From WWI to the eve of WWII. In 1916, in the midst of the First World War, American Jews petitioned Pope Benedict XV on behalf of the Polish Jews. Nazi anti-Semitism. During a meeting with the Roman Catholic Bishop Wilhelm Berning of Osnabrück on April 26, 1933, Hitler declared: "I have been attacked because of my handling of the Jewish question. The Catholic Church considered the Jews pestilent for fifteen hundred years, put them in ghettos, etc., because it recognized the Jews for what they were. In the epoch of liberalism, the danger was no longer recognized. I am moving back toward the time in which a fifteen-hundred-year-long tradition was implemented. I do not set race over religion, but I recognize the representatives of this race as pestilent for the state and for the Church, and perhaps I am thereby doing Christianity a great service by pushing them out of schools and public functions." The transcript of the discussion does not contain any response by Bishop Berning.
Martin Rhonheimer does not consider this unusual because, in his opinion, for a Catholic bishop in 1933 there was nothing particularly objectionable "in this historically correct reminder". The Nazis used Martin Luther's book "On the Jews and Their Lies" (1543) to justify their claim that their ideology was morally righteous. Luther seems to advocate the murder of Jews who refused to convert to Christianity by writing that "we are at fault in not slaying them." Archbishop Robert Runcie asserted: "Without centuries of Christian anti-Semitism, Hitler's passionate hatred would never have been so fervently echoed... because for centuries Christians have held Jews collectively responsible for the death of Jesus. On Good Friday in times past, Jews have cowered behind locked doors with fear of a Christian mob seeking 'revenge' for deicide. Without the poisoning of Christian minds through the centuries, the Holocaust is unthinkable." The dissident Catholic priest Hans Küng has written that "Nazi anti-Judaism was the work of godless, anti-Christian criminals. But it would not have been possible without the almost two thousand years' pre-history of 'Christian' anti-Judaism..." The consensus among historians is that Nazism as a whole was either unrelated to Christianity or actively opposed to it, and that Hitler was strongly critical of Christianity, although Germany remained mostly Christian during the Nazi era. The document Dabru Emet was issued by over 220 rabbis and intellectuals from all branches of Judaism in 2000 as a statement about Jewish-Christian relations. This document states: "Nazism was not a Christian phenomenon. Without the long history of Christian anti-Judaism and Christian violence against Jews, Nazi ideology could not have taken hold nor could it have been carried out. Too many Christians participated in, or were sympathetic to, Nazi atrocities against Jews. Other Christians did not protest sufficiently against these atrocities. But Nazism itself was not an inevitable outcome of Christianity." According to American historian Lucy Dawidowicz, anti-Semitism has a long history within Christianity. The line of "anti-Semitic descent" from Luther, the author of "On the Jews and Their Lies", to Hitler is "easy to draw." In "The War Against the Jews, 1933–1945", she contends that Luther and Hitler were obsessed by the "demonologized universe" inhabited by Jews. Dawidowicz writes that the similarities between Luther's anti-Jewish writings and modern anti-Semitism are no coincidence because they derived from a common history of "Judenhass", which can be traced to Haman's advice to Ahasuerus. Although modern German anti-Semitism also has its roots in German nationalism and the liberal revolution of 1848, Christian anti-Semitism, she writes, is a foundation that was laid by the Roman Catholic Church and "upon which Luther built." Opposition to the Holocaust. The Confessing Church was, in 1934, the first Christian opposition group. The Catholic Church officially condemned the Nazi theory of racism in Germany in 1937 with the encyclical "Mit brennender Sorge", signed by Pope Pius XI, and Cardinal Michael von Faulhaber led the Catholic opposition, preaching against racism. Many individual Christian clergy and laypeople of all denominations had to pay for their opposition with their lives. By the 1940s, few Christians were willing to publicly oppose Nazi policy, but many Christians secretly helped save the lives of Jews.
There are many sections of Israel's Holocaust Remembrance Museum, Yad Vashem, which are dedicated to honoring these "Righteous Among the Nations". Pope Pius XII. Before he became Pope, Cardinal Pacelli addressed the International Eucharistic Congress in Budapest on 25–30 May 1938, where he referred to the Jews as those "whose lips curse [Christ] and whose hearts reject him even today"; at this time, anti-Semitic laws were in the process of being formulated in Hungary. The 1937 encyclical "Mit brennender Sorge" was issued by Pope Pius XI but drafted by the future Pope Pius XII, and it was read from the pulpits of all German Catholic churches. It condemned Nazi ideology, and scholars have characterized it as the "first great official public document to dare to confront and criticize Nazism" and "one of the greatest such condemnations ever issued by the Vatican." In the summer of 1942, in the presence of his college of Cardinals, Pius explained the reasons for the great gulf that existed between Jews and Christians at the theological level: "Jerusalem has responded to His call and to His grace with the same rigid blindness and stubborn ingratitude that has led it along the path of guilt to the murder of God." Historian Guido Knopp describes these comments of Pius as being "incomprehensible" at a time when "Jerusalem was being murdered by the million". This traditional adversarial relationship with Judaism would be reversed in "Nostra aetate", issued in 1965 by the Second Vatican Council, which had been convened in 1962 under the papacy of John XXIII. Prominent members of the Jewish community have contradicted the criticisms of Pius and have spoken highly of his efforts to protect Jews. The Israeli historian Pinchas Lapide interviewed war survivors and concluded that Pius XII "was instrumental in saving at least 700,000, but probably as many as 860,000 Jews from certain death at Nazi hands". Some historians dispute this estimate. "White Power" movement. The Christian Identity movement, the Ku Klux Klan, and other White supremacist groups have expressed anti-Semitic views. They claim that their anti-Semitism is based on purported Jewish control of the media, control of international banks, involvement in radical left-wing politics, and the Jews' promotion of multiculturalism, anti-Christian groups, liberalism and perverse organizations. They reject charges of racism by claiming that Jews who share their views maintain membership in their organizations. A racial belief that is common among these groups, but not universal among them, is an alternative history doctrine concerning the descendants of the Lost Tribes of Israel. In some of its forms, this doctrine absolutely denies the view that modern Jews have any ethnic connection to the Israel of the Bible. Instead, according to extreme forms of this doctrine, the true Israelites and the true humans are the members of the Adamic (white) race. These groups are often rejected and not considered Christian groups by mainstream Christian denominations and the majority of Christians around the world. Post World War II anti-Semitism. Anti-Semitism remains a substantial problem in Europe and, to a greater or lesser degree, it also exists in many other nations, including those of Eastern Europe and the former Soviet Union, and tensions between some Muslim immigrants and Jews have increased across Europe. The US State Department reports that anti-Semitism has increased dramatically in Europe and Eurasia since 2000.
While it has been on the decline since the 1940s, a measurable amount of anti-Semitism still exists in the United States, although acts of violence are rare. For example, the influential Evangelical preacher Billy Graham and the then-president Richard Nixon were caught on tape in the early 1970s discussing what they saw as the Jews' control of the American media. This belief in Jewish conspiracies and domination of the media was similar to the views of Graham's former mentors: William Bell Riley, who chose Graham to succeed him as the second president of Northwestern Bible and Missionary Training School, and the evangelist Mordecai Ham, who led the meetings at which Graham first came to believe in Christ. Both men held strongly anti-Semitic views. The 2001 survey by the Anti-Defamation League (ADL), a Jewish group which devotes its efforts to the fight against anti-Semitism and other forms of racism, reported 1,432 acts of anti-Semitism in the United States that year. The figure included 877 acts of harassment, including verbal intimidation, threats, and physical assaults. Some Christian Zionists, such as John Hagee, have also been accused of espousing anti-Semitism; Hagee argued that the Jews brought the Holocaust upon themselves by angering God. Relations between Jews and Christians have dramatically improved since the 20th century. A global poll conducted by the ADL in 2014, which collected data from 102 countries on their populations' attitudes towards Jews, found that 24% of the world's Christians held views considered anti-Semitic on the ADL's index, compared to 49% of the world's Muslims. Anti-Judaism. Many Christians do not consider anti-Judaism to be anti-Semitism. They regard anti-Judaism as a disagreement with the tenets of Judaism by religiously sincere people, while they regard anti-Semitism as an emotional bias or hatred which does not specifically target the religion of Judaism. Under this approach, anti-Judaism is not regarded as anti-Semitism because it does not involve actual hostility towards the Jewish people; instead, it rejects only the religious beliefs of Judaism. Others believe that anti-Judaism is the rejection of Judaism as a religion or opposition to Judaism's beliefs and practices "essentially because" of their source in Judaism or because a belief or practice is associated with the Jewish people. (But see supersessionism.) Several scholars, including Susannah Heschel, Gavin I. Langmuir, and Uriel Tal, hold the position that anti-Judaism directly led to modern anti-Semitism. Pope John Paul II, in "We Remember: A Reflection on the Shoah", and the Jewish declaration on Christianity, Dabru Emet, both took the position that "Christian theological anti-Judaism is a phenomenon which is distinct from modern anti-Semitism, which is rooted in economic and racial thought, so that Christian teachings should not be held responsible for anti-Semitism". Although some Christians did consider anti-Judaism to be contrary to Christian teaching in the past, this view was not widely expressed by Christian leaders and lay people. In many cases, practical tolerance towards the Jewish religion and Jews prevailed. Some Christian groups condemned verbal anti-Judaism, particularly in their early years. Conversion of Jews. Some Jewish organizations have denounced as anti-Semitic those evangelistic and missionary activities that specifically target Jews. 
The Southern Baptist Convention (SBC), the largest Protestant Christian denomination in the U.S., has explicitly rejected suggestions that it should back away from seeking to convert Jews, a position which critics have called anti-Semitic but which Baptists believe is consistent with their view that salvation is found solely through faith in Christ. In 1996 the SBC approved a resolution calling for efforts to seek the conversion of Jews "as well as the salvation of 'every kindred and tongue and people and nation.'" Most Evangelicals agree with the SBC's position, and some of them also support efforts that specifically seek the Jews' conversion. Additionally, these Evangelical groups are among the most pro-Israel groups. ("For more information, see Christian Zionism".) One controversial group which has received a considerable amount of support from some Evangelical churches is Jews for Jesus, which claims that Jews can "complete" their Jewish faith by accepting Jesus as the Messiah. The Presbyterian Church (USA), the United Methodist Church, and the United Church of Canada have ended their efforts to convert Jews. While Anglicans do not, as a rule, seek converts from other Christian denominations, the General Synod has affirmed that "the good news of salvation in Jesus Christ is for all and must be shared with all including people from other faiths or of no faith and that to do anything else would be to institutionalize discrimination". The Roman Catholic Church formerly operated religious congregations that specifically aimed to convert Jews. Some of these congregations were founded by Jewish converts, like the Congregation of Our Lady of Sion, whose members were nuns and ordained priests. Many Catholic saints, such as Vincent Ferrer, were specifically noted for their missionary zeal to convert Jews. After the Second Vatican Council, many missionary orders that aimed to convert Jews to Christianity no longer actively sought to missionize (or proselytize) them. However, Traditionalist Roman Catholic groups, congregations, and clergymen continue to advocate the missionizing of Jews according to traditional patterns, sometimes with success ("e.g.", the Society of St. Pius X, which has notable Jewish converts among its faithful, many of whom have become traditionalist priests). The Church's Ministry Among Jewish People (CMJ) is one of the ten official mission agencies of the Church of England. The Society for Distributing Hebrew Scriptures is another organization, but it is not affiliated with the established Church. There are several prophecies concerning the conversion of the Jewish people to Christianity in the scriptures of the Church of Jesus Christ of Latter-day Saints (LDS). The Book of Mormon teaches that the Jewish people need to believe in Jesus to be gathered to Israel. The Doctrine & Covenants teaches that the Jewish people will be converted to Christianity during the Second Coming, when Jesus appears to them and shows them his wounds, and that if the Jewish people do not convert to Christianity, the world will be cursed. Early LDS prophets, such as Brigham Young and Wilford Woodruff, taught that Jewish people could not be truly converted because of the curse they believed had resulted from Jewish deicide. However, after the establishment of the state of Israel, many LDS members felt that it was time for the Jewish people to start converting to Mormonism. 
During the 1950s, the LDS Church established several missions that specifically targeted Jewish people in several cities in the United States. After the LDS church began to give the priesthood to all males regardless of race in 1978, it also started to deemphasize the importance of race concerning conversion. This led to a void of doctrinal teachings that resulted in a spectrum of views on how LDS members interpret scripture and previous teachings. According to research which was conducted by Armand Mauss, most LDS members believe that the Jewish people will need to be converted to Christianity to be forgiven for the crucifixion of Jesus Christ. The Church of Jesus Christ of Latter-day Saints has also been criticized for baptizing deceased Jewish Holocaust victims. In 1995, in part as a result of public pressure, church leaders promised to put new policies into place that would help the church to end the practice, unless it was specifically requested or approved by the surviving spouses, children or parents of the victims. However, the practice has continued, including the baptism of the parents of Holocaust survivor and Jewish rights advocate Simon Wiesenthal. Reconciliation between Judaism and Christian groups. In recent years, there has been much to note in the way of reconciliation between some Christian groups and the Jews.
6731
1216158
https://en.wikipedia.org/wiki?curid=6731
Boeing C-17 Globemaster III
The McDonnell Douglas/Boeing C-17 Globemaster III is a large military transport aircraft developed for the United States Air Force (USAF) during the 1980s and the early 1990s by McDonnell Douglas. The C-17 carries forward the name of two previous piston-engined military cargo aircraft, the Douglas C-74 Globemaster and the Douglas C-124 Globemaster II. The C-17 is based upon the YC-15, a smaller prototype airlifter designed during the 1970s. It was designed to replace the Lockheed C-141 Starlifter and also to fulfill some of the duties of the Lockheed C-5 Galaxy. The redesigned airlifter differs from the YC-15 in that it is larger and has swept wings and more powerful engines. Development was protracted by a series of design issues, causing the company to incur a loss of nearly US$1.5 billion on the program's development phase. On 15 September 1991, roughly one year behind schedule, the first C-17 performed its maiden flight. The C-17 formally entered USAF service on 17 January 1995. McDonnell Douglas, and later Boeing after the two companies merged in 1997, manufactured the C-17 for more than two decades. The final C-17 was completed at the Long Beach, California, plant and flown in November 2015. The C-17 commonly performs tactical and strategic airlift missions, transporting troops and cargo throughout the world; additional roles include medical evacuation and airdrop duties. The transport is in service with the USAF along with the air forces of India, the United Kingdom, Australia, Canada, Qatar, the United Arab Emirates, and Kuwait, and with the Europe-based multilateral Heavy Airlift Wing. The type played a key logistical role during both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as in providing humanitarian aid in the aftermath of various natural disasters, including the 2010 Haiti earthquake, the 2011 Sindh floods, and the 2023 Turkey–Syria earthquake. Development. Background and design phase. In the 1970s, the U.S. Air Force began looking for a replacement for its Lockheed C-130 Hercules tactical cargo aircraft. The Advanced Medium STOL Transport (AMST) competition was held, with Boeing proposing the YC-14 and McDonnell Douglas the YC-15. Though both entrants exceeded the specified requirements, the AMST competition was canceled before a winner was selected. The USAF started the C-X program in November 1979 to develop a larger AMST with longer range to augment its strategic airlift. By 1980, the USAF had a large fleet of aging C-141 Starlifter cargo aircraft. Compounding matters, increased strategic airlift capabilities were needed to fulfill its rapid-deployment airlift requirements. The USAF set mission requirements and released a request for proposals (RFP) for the C-X in October 1980. McDonnell Douglas chose to develop a new aircraft based on the YC-15. Boeing bid an enlarged three-engine version of its AMST YC-14. Lockheed submitted both a C-5-based design and an enlarged C-141 design. On 28 August 1981, McDonnell Douglas was chosen to build its proposal, then designated "C-17". Compared to the YC-15, the new aircraft differed in having swept wings, increased size, and more powerful engines. This would allow it to perform the work done by the C-141 and to fulfill some of the duties of the Lockheed C-5 Galaxy, freeing the C-5 fleet for outsize cargo. Alternative proposals were pursued to fill airlift needs after the C-X contest. 
These were lengthening of C-141As into C-141Bs, ordering more C-5s, continued purchases of KC-10s, and expansion of the Civil Reserve Air Fleet. Limited budgets reduced program funding, requiring a delay of four years. During this time contracts were awarded for preliminary design work and for the completion of engine certification. In December 1985, a full-scale development contract was awarded, under Program Manager Bob Clepper. At this time, first flight was planned for 1990. The USAF had formed a requirement for 210 aircraft. Development problems and limited funding caused delays in the late 1980s. Criticisms were made of the developing aircraft and questions were raised about more cost-effective alternatives during this time. In April 1990, Secretary of Defense Dick Cheney reduced the order from 210 to 120 aircraft. The maiden flight of the C-17 took place on 15 September 1991 from the McDonnell Douglas's plant in Long Beach, California, about a year behind schedule. The first aircraft (T-1) and five more production models (P1-P5) participated in extensive flight testing and evaluation at Edwards Air Force Base. Two complete airframes were built for static and repeated load testing. Development difficulties. A static test of the C-17 wing in October 1992 resulted in its failure at 128% of design limit load, below the 150% requirement. Both wings buckled rear to the front and failures occurred in stringers, spars, and ribs. Some $100 million was spent to redesign the wing structure; the wing failed at 145% during a second test in September 1993. A review of the test data, however, showed that the wing was not loaded correctly and did indeed meet the requirement. The C-17 received the "Globemaster III" name in early 1993. In late 1993, the Department of Defense (DoD) gave the contractor two years to solve production issues and cost overruns or face the contract's termination after the delivery of the 40th aircraft. By accepting the 1993 terms, McDonnell Douglas incurred a loss of nearly US$1.5 billion on the program's development phase. In March 1994, the Non-Developmental Airlift Aircraft program was established to procure a transport aircraft using commercial practices as a possible alternative or supplement to the C-17. Initial material solutions considered included: buy a modified Boeing 747-400 NDAA, restart the C-5 production line, extend the C-141 service life, and continue C-17 production. The field eventually narrowed to: the Boeing 747-400 (provisionally named the C-33), the Lockheed Martin C-5D, and the McDonnell Douglas C-17. The NDAA program was initiated after the C-17 program was temporarily capped at a 40-aircraft buy (in December 1993) pending further evaluation of C-17 cost and performance and an assessment of commercial airlift alternatives. In April 1994, the program remained over budget and did not meet weight, fuel burn, payload, and range specifications. It failed several key criteria during airworthiness evaluation tests. Problems were found with the mission software, landing gear, and other areas. In May 1994, it was proposed to cut production to as few as 32 aircraft; these cuts were later rescinded. A July 1994 Government Accountability Office (GAO) report revealed that USAF and DoD studies from 1986 and 1991 stated the C-17 could use 6,400 more runways outside the U.S. than the C-5, but these studies had only considered runway dimensions, but not runway strength or load classification numbers (LCN). 
The C-5 has a lower LCN, but the USAF classifies both in the same broad load classification group. When runway dimensions and load ratings were both considered, the C-17's worldwide runway advantage over the C-5 shrank from 6,400 to 911 airfields. The report also stated that current military doctrine did not reflect the use of small, austere airfields, and thus the C-17's short-field capability was not considered. A January 1995 GAO report stated that the USAF originally planned to order 210 C-17s at a cost of $41.8 billion, and that the 120 aircraft on order were to cost $39.5 billion based on a 1992 estimate. In March 1994, the U.S. Army decided it did not need the low-altitude parachute-extraction system delivery with the C-17 and that the C-130's capability was sufficient. C-17 testing was limited to this lower weight. Airflow issues prevented the C-17 from meeting airdrop requirements. A February 1997 GAO report revealed that a C-17 with a full payload could not land on wet runways; simulations suggested a distance of was required. The YC-15 was transferred to AMARC to be made flightworthy again for further flight tests for the C-17 program in March 1997. By September 1995, most of the prior issues were reportedly resolved and the C-17 was meeting all performance and reliability targets. The first USAF squadron was declared operational in January 1995. Production and deliveries. In 1996, the DoD ordered another 80 aircraft for a total of 120. In 1997, McDonnell Douglas merged with domestic competitor Boeing. In April 1999, Boeing offered to cut the C-17's unit price if the USAF bought 60 more; in August 2002, the order was increased to 180 aircraft. In 2007, 190 C-17s were on order for the USAF. On 6 February 2009, Boeing was awarded a $2.95 billion contract for 15 additional C-17s, increasing the total USAF fleet to 205 and extending production from August 2009 to August 2010. On 6 April 2009, U.S. Secretary of Defense Robert Gates stated that there would be no more C-17s ordered beyond the 205 planned. However, on 12 June 2009, the House Armed Services Air and Land Forces Subcommittee added a further 17 C-17s. Debate arose over follow-on C-17 orders: the USAF requested a line shutdown, while Congress called for further production. In FY2007, the USAF requested $1.6 billion in response to "excessive combat use" of the C-17 fleet. In 2008, USAF General Arthur Lichte, Commander of Air Mobility Command, told a House of Representatives subcommittee on air and land forces of a need to extend production by another 15 aircraft to increase the total to 205, and said that C-17 production might continue to satisfy airlift requirements. The USAF finally decided to cap its C-17 fleet at 223 aircraft; the final delivery was on 12 September 2013. In 2010, Boeing reduced the production rate from a high of 16 aircraft per year to 10 per year, due to dwindling orders and to extend the production line's life while additional orders were sought. The workforce was reduced by about 1,100 through 2012, and a second shift at the Long Beach plant was also eliminated. By April 2011, 230 production C-17s had been delivered, including 210 to the USAF. The C-17 prototype "T-1" was retired in 2012 after use as a testbed by the USAF. In January 2010, the USAF announced the end of Boeing's performance-based logistics contracts to maintain the type. On 19 June 2012, the USAF ordered its 224th and final C-17 to replace one that crashed in Alaska in July 2010. 
In September 2013, Boeing announced that C-17 production was starting to close down. In October 2014, the main wing spar of the 279th and last aircraft was completed; this C-17 was delivered in 2015, after which Boeing closed the Long Beach plant. Production of spare components was to continue until at least 2017. The C-17 is projected to be in service for several decades. In February 2014, Boeing was engaged in sales talks with "five or six" countries for the remaining 15 C-17s; thus Boeing decided to build ten aircraft without confirmed buyers in anticipation of future purchases. In May 2015, "The Wall Street Journal" reported that Boeing expected to book a charge of under $100 million and cut 3,000 positions associated with the C-17 program, and also suggested that Airbus' lower cost A400M Atlas took international sales away from the C-17. In June 2025, it was announced that Boeing was in talks with an international customer to restart the production of C-17s, and that several other countries were interested in the prospect. There is speculation that the United States may be interested in buying new C-17s, as there is currently no replacement planned for existing C-17s or the aging C-5 Galaxy. Japanese Prime Minister Shigeru Ishiba stated that the Japanese Air Self-Defense Force would be interested in acquiring C-17s. Design. The C-17 Globemaster III is a strategic transport aircraft, able to airlift cargo close to a battle area. The size and weight of U.S. mechanized firepower and equipment have grown in recent decades from increased air mobility requirements, particularly for large or heavy non-palletized outsize cargo. It has a length of and a wingspan of , and uses about 8% composite materials, mostly in secondary structure and control surfaces. The aircraft features an anhedral wing configuration, providing pitch and roll stability to the aircraft. The aircraft's stability is furthered by its T-tail design, raising the center of pressure even higher above the center of mass. Drag is also lowered, as the horizontal stabilizer is far removed from the vortices generated by the two wings of the aircraft. The C-17 is powered by four Pratt & Whitney F117-PW-100 turbofan engines, which are based on the commercial Pratt & Whitney PW2040 used on the Boeing 757. Each engine is rated at of thrust. The engine's thrust reversers direct engine exhaust air upwards and forward, reducing the chances of foreign object damage by ingestion of runway debris, and providing enough reverse thrust to back up the aircraft while taxiing. The thrust reversers can also be used in flight at idle-reverse for added drag in maximum-rate descents. In vortex surfing tests performed by two C-17s, up to 10% fuel savings were reported. For cargo operations the C-17 requires a crew of three: pilot, copilot, and loadmaster. The cargo compartment is long by wide by high. The cargo floor has rollers for palletized cargo but it can be flipped to provide a flat floor suitable for vehicles and other rolling stock. Cargo is loaded through a large aft ramp that accommodates rolling stock, such as a 69-ton (63-metric ton) M1 Abrams main battle tank, other armored vehicles, trucks, and trailers, along with palletized cargo. Maximum payload of the C-17 is , and its maximum takeoff weight is . With a payload of and an initial cruise altitude of , the C-17 has an unrefueled range of about on the first 71 aircraft, and on all subsequent extended-range models that include a sealed center wing bay as a fuel tank. 
Boeing informally calls these aircraft the "C-17 ER". The C-17's cruise speed is about (Mach 0.74). It is designed to airdrop 102 paratroopers and their equipment. According to Boeing the maximum unloaded range is . The C-17 is designed to operate from runways as short as and as narrow as . The C-17 can also operate from unpaved, unimproved runways (although with a higher probability to damage the aircraft). The thrust reversers can be used to move the aircraft backwards and reverse direction on narrow taxiways using a three- (or more) point turn. The plane is designed for 20 man-hours of maintenance per flight hour, and a 74% mission availability rate. Operational history. United States Air Force. The first production C-17 was delivered to Charleston Air Force Base, South Carolina, on 14 July 1993. The first C-17 unit, the 17th Airlift Squadron, became operationally ready on 17 January 1995. It has broken 22 records for oversized payloads. The C-17 was awarded U.S. aviation's most prestigious award, the Collier Trophy, in 1994. A Congressional report on operations in Kosovo and Operation Allied Force noted "One of the great success stories...was the performance of the Air Force's C-17A" It flew half of the strategic airlift missions in the operation, the type could use small airfields, easing operations; rapid turnaround times also led to efficient utilization. C-17s delivered military supplies during Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq as well as humanitarian aid in the aftermath of the 2010 Haiti earthquake, and the 2011 Sindh floods, delivering thousands of food rations, tons of medical and emergency supplies. On 26 March 2003, 15 USAF C-17s participated in the biggest combat airdrop since the United States invasion of Panama in December 1989: the night-time airdrop of 1,000 paratroopers from the 173rd Airborne Brigade occurred over Bashur, Iraq. These airdrops were followed by C-17s ferrying M1 Abrams, M2 Bradleys, M113s and artillery. USAF C-17s have also assisted allies in their airlift needs, such as Canadian vehicles to Afghanistan in 2003 and Australian forces for the Australian-led military deployment to East Timor in 2006. In 2006, USAF C-17s flew 15 Canadian Leopard C2 tanks from Kyrgyzstan into Kandahar in support of NATO's Afghanistan mission. In 2013, five USAF C-17s supported French operations in Mali, operating with other nations' C-17s (RAF, NATO and RCAF deployed a single C-17 each). Flight crews have nicknamed the aircraft "the Moose", because during ground refueling, the pressure relief vents make a sound like the call of a female moose in heat. Since 1999, C-17s have flown annually to Antarctica on Operation Deep Freeze in support of the US Antarctic Research Program, replacing the C-141s used in prior years. The initial flight was flown by the USAF 62nd Airlift Wing. The C-17s fly round trip between Christchurch Airport and McMurdo Station around October each year and take 5 hours to fly each way. In 2006, the C-17 flew its first Antarctic airdrop mission, delivering 70,000 pounds of supplies. Further air drops occurred during subsequent years. A C-17 accompanies the President of the United States on his visits to both domestic and foreign arrangements, consultations, and meetings. It is used to transport the Presidential Limousine, Marine One, and security detachments. On several occasions, a C-17 has been used to transport the President himself, using the Air Force One call sign while doing so. 
Rapid Dragon missile launcher testing. In 2015, as part of a missile-defense test at Wake Island, simulated medium-range ballistic missiles were launched from C-17s against THAAD missile defense systems and the USS "John Paul Jones" (DDG-53). In early 2020, palletized munitions, known as "Combat Expendable Platforms", were tested from C-17s and C-130Js with results the USAF considered positive. In 2021, the Air Force Research Laboratory further developed the concept into tests of the Rapid Dragon system, which transforms the C-17 into a cruise missile arsenal ship capable of mass-launching 45 JASSM-ER missiles with 500 kg warheads from a standoff distance of . Anticipated improvements included support for the JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range once full production of the JASSM-XR delivered large inventories, which was expected in 2024. Evacuation of Afghanistan. On 15 August 2021, USAF C-17 02-1109 from the 62nd Airlift Wing and 446th Airlift Wing at Joint Base Lewis-McChord departed Hamid Karzai International Airport in Kabul, Afghanistan, while crowds of people trying to escape the 2021 Taliban offensive ran alongside the aircraft. The C-17 lifted off with people holding on to the outside, and at least two died after falling from the aircraft. An unknown number of people may have been crushed and killed by the retracting landing gear; human remains were found in the landing-gear stowage. Also that day, C-17 01-0186 from the 816th Expeditionary Airlift Squadron at Al Udeid Air Base transported 823 Afghan citizens from Hamid Karzai International Airport on a single flight, setting a new record for the type; the previous record was over 670 people, carried during a 2013 typhoon evacuation from Tacloban, Philippines. 
After Typhoon Haiyan hit the Philippines in 2013, CC-177s established an air bridge between the two nations, deploying Canada's DART and delivering humanitarian supplies and equipment. In 2014, they supported Operation Reassurance and Operation Impact. Strategic Airlift Capability program. At the 2006 Farnborough Airshow, a number of NATO member nations signed a letter of intent to jointly purchase and operate several C-17s within the Strategic Airlift Capability (SAC). The purchase was for two C-17s, and a third was contributed by the U.S. On 14 July 2009, Boeing delivered the first C-17 for the SAC program with the second and third C-17s delivered in September and October 2009. SAC members are Bulgaria, Estonia, Finland, Hungary, Lithuania, the Netherlands, Norway, Poland, Romania, Slovenia, Sweden and the U.S. as of 2024. The SAC C-17s are based at Pápa Air Base, Hungary. The Heavy Airlift Wing is hosted by Hungary, which acts as the flag nation. The aircraft are crewed in similar fashion as the NATO E-3 AWACS aircraft. The C-17 flight crew are multi-national, but each mission is assigned to an individual member nation based on the SAC's annual flight hour share agreement. The NATO Airlift Management Programme Office (NAMPO) provides management and support for the Heavy Airlift Wing. NAMPO is a part of the NATO Support Agency (NSPA). In September 2014, Boeing stated that the three C-17s supporting SAC missions had achieved a readiness rate of nearly 94 percent over the last five years and supported over 1,000 missions. Indian Air Force. The C-17 provides the IAF with strategic airlift, the ability to deploy special forces, and to operate in diverse terrain – from Himalayan air bases in North India at to Indian Ocean bases in South India. The C-17s are based at Hindon Air Force Station and are operated by No. 81 Squadron IAF "Skylords". The first C-17 was delivered in January 2013 for testing and training; it was officially accepted on 11 June 2013. The second C-17 was delivered on 23 July 2013 and put into service immediately. IAF Chief of Air Staff Norman AK Browne called it "a major component in the IAF's modernization drive" while taking delivery of the aircraft at Boeing's Long Beach factory. On 2 September 2013, the "Skylords" squadron with three C-17s officially entered IAF service. The "Skylords" regularly fly missions within India, such as to high-altitude bases at Leh and Thoise. The IAF first used the C-17 to transport an infantry battalion's equipment to Port Blair on Andaman Islands on 1 July 2013. Foreign deployments to date include Tajikistan in August 2013, and Rwanda to support Indian peacekeepers. One C-17 was used for transporting relief materials during Cyclone Phailin. The sixth aircraft was received in July 2014. In June 2017, the U.S. Department of State approved the potential sale of one C-17 to India under a proposed $366 million (~$ in ) U.S. Foreign Military Sale. This aircraft, the last C-17 produced, increased the IAF's fleet to 11 C-17s. In March 2018, a contract was awarded for completion by 22 August 2019. On 26 August 2019, Boeing delivered the 11th C-17 Globemaster III to the Indian Air Force. On 7 February 2023, an IAF C-17 delivered humanitarian aid packages for earthquake victims in Turkey and Syria by taking a detour around Pakistan's airspace in the aftermath of 2021 Taliban takeover of Afghanistan. 
An IAF C-17 executed a precision airdrop of two Combat Rubberised Raiding Craft along with a platoon of 8 MARCOS commandos in an operation to rescue the "ex-MV Ruen", a Maltese-flagged cargo ship hijacked by Somali pirates in December 2023. The mission was conducted on 16 March 2024 in a 10-hour round trip mission to an area 2600 km away from the Indian coast. The ship was being used as a mothership for piracy. In a joint operation carried out with the Indian Navy assets such as P-8I Neptune maritime patrol aircraft, SeaGuardian drones, destroyer "INS Kolkata" and patrol vessel "INS Subhadra", the IAF C-17 airdropped Navy's MARCOS commandos, who boarded the hijacked ship, rescued 17 sailors and disarmed 35 pirates in the operation. Qatar. Boeing delivered Qatar's first C-17 on 11 August 2009 and the second on 10 September 2009 for the Qatar Emiri Air Force. Qatar received its third C-17 in 2012, and fourth C-17 was received on 10 December 2012. In June 2013, "The New York Times" reported that Qatar was allegedly using its C-17s to ship weapons from Libya to the Syrian opposition during the civil war via Turkey. On 15 June 2015, it was announced at the Paris Airshow that Qatar agreed to order four additional C-17s from the five remaining "white tail" C-17s to double Qatar's C-17 fleet. One Qatari C-17 bears the civilian markings of government-owned Qatar Airways, although the airplane is owned and operated by the Qatar Emiri Air Force. The head of Qatar's airlift selection committee, Ahmed Al-Malki, said the paint scheme was "to build awareness of Qatar's participation in operations around the world."
6734
39405117
https://en.wikipedia.org/wiki?curid=6734
Garbage collection (computer science)
In computer science, garbage collection (GC) is a form of automatic memory management. The "garbage collector" attempts to reclaim memory that was allocated by the program but is no longer referenced; such memory is called "garbage". Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp. Garbage collection relieves the programmer from manual memory management, in which the programmer specifies which objects to de-allocate and return to the memory system, and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time and can affect performance as a result. Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors); some of these methods also de-allocate memory. Overview. Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be "garbage-collected languages". Other languages, such as C and C++, were designed for use with manual memory management but have garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required. Although many languages integrate GC into their compiler and runtime system, "post-hoc" GC systems also exist, such as Automatic Reference Counting (ARC). Some of these "post-hoc" GC systems do not require recompilation. Advantages. GC frees the programmer from manually de-allocating memory. This helps avoid certain kinds of errors, such as dangling-pointer dereferences, double-free bugs, and some classes of memory leaks. Disadvantages. GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetimes manually in the source code is overhead, which can impair program performance. A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealized explicit memory management. However, the comparison is made against a program generated by inserting de-allocation calls using an oracle, implemented by collecting traces from runs of the program under a profiler, and the resulting program is only correct for one particular execution. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection in iOS, despite it being the most desired feature. The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift and free memory) scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs. 
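To make the notion of "garbage" concrete, the following minimal Python sketch (illustrative only; the "Node" class and "probe" variable are invented for the example, and the exact moment of reclamation depends on the runtime) shows an object becoming unreachable and then being reclaimed:

```python
import gc
import weakref

class Node:
    """A small object whose lifetime we want to observe."""
    pass

obj = Node()
probe = weakref.ref(obj)   # a weak reference lets us observe reclamation
                           # without keeping the object alive

obj = None                 # drop the last strong reference: the Node is now garbage

# CPython happens to reclaim the object immediately via reference counting;
# a purely tracing collector might not reclaim it until its next collection
# cycle, so in general the timing is up to the runtime.
gc.collect()               # force a full collection; normally the runtime decides
                           # when to run, and that work is what shows up as pauses

print(probe() is None)     # True: the unreferenced object has been reclaimed
```

The stop-the-world work performed by such collections is the source of the stalls described above.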
Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs. Strategies. Tracing. Tracing garbage collection is the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection rather than to other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are "reachable" by a chain of references from certain root objects, and considering the rest as garbage and collecting them (a minimal mark-and-sweep sketch appears at the end of this article). However, a large number of algorithms are used in implementation, with widely varying complexity and performance characteristics. Reference counting. In reference counting garbage collection, each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed; when the count reaches zero, the object's memory is reclaimed (a minimal sketch of this mechanism appears below). As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and it usually accesses only memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends not to have significant negative side effects on CPU cache and virtual memory operation. Reference counting has a number of disadvantages, notably the inability to reclaim cycles of objects that reference each other, the space overhead of storing a count in every object, and the run-time cost of updating counts; these can generally be solved or mitigated by more sophisticated algorithms. Escape analysis. Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and the associated memory management costs. Availability. Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built-in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++. Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection. Other dynamic languages, such as Ruby, Julia, JavaScript, and ECMAScript, also tend to use GC (but not Perl 5 or PHP before version 5.3, both of which use reference counting). Object-oriented programming languages such as Smalltalk, ooRexx, RPL, and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors. BASIC. BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. 
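As a concrete illustration of the reference-counting strategy described above, here is a deliberately simplified, hypothetical sketch in Python; real runtimes store the count in the object's header and adjust it automatically on every pointer assignment, rather than through explicit calls like the incref/decref methods used here:

```python
class RefCounted:
    """Toy reference-counted object (illustrative, not any runtime's actual mechanism)."""

    def __init__(self, payload):
        self.payload = payload
        self.count = 0          # number of live references to this object

    def incref(self):
        self.count += 1         # a new reference to the object is created

    def decref(self):
        self.count -= 1         # an existing reference is destroyed
        if self.count == 0:
            self.destroy()      # no references remain: the object is garbage

    def destroy(self):
        # A real collector would also decrement the counts of everything this
        # object points to, then return its memory to the allocator.
        print(f"reclaiming {self.payload!r}")


obj = RefCounted("buffer")
obj.incref()    # first reference created
obj.incref()    # second reference created
obj.decref()    # one reference destroyed; count is 1, the object survives
obj.decref()    # last reference destroyed; count reaches 0 and the object is reclaimed

# Two objects that reference only each other never reach a count of zero,
# which is why pure reference counting cannot reclaim cycles on its own.
```

CPython, for example, supplements its reference counting with a separate cycle-detecting collector for exactly this reason.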
The Applesoft BASIC interpreter suffered from similar pauses: its garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in O(n²) performance and pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.SYSTEM, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster. Objective-C. While Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector. However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counter (ARC), which had been introduced with OS X 10.7. Furthermore, since May 2015 Apple has forbidden the use of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced because of problems with application responsiveness and performance; instead, iOS uses ARC. Limited environments. Garbage collection is rarely used on embedded or real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework, and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection. Java. Garbage collectors available in OpenJDK's Java virtual machine (JVM) include the Serial, Parallel, Garbage-First (G1), ZGC, and Shenandoah collectors. Compile-time use. Compile-time garbage collection is a form of static analysis allowing memory to be reused and reclaimed based on invariants known during compilation. This form of garbage collection has been studied in the Mercury programming language, and it saw greater usage with the introduction of LLVM's automatic reference counter (ARC) into Apple's ecosystem (iOS and OS X) in 2011. Real-time systems. Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman. In Baker's algorithm, allocation is done in either half of a single region of memory. When that half becomes full, a garbage collection is performed which moves the live objects into the other half, and the remaining objects are implicitly deallocated. The running program (the "mutator") has to check that any object it references is in the correct half, and if it is not, move it across, while a background task is finding all of the objects. Generational garbage collection schemes are based on the empirical observation that most objects die young. In generational garbage collection, two or more allocation regions (generations) are maintained and kept separate based on the objects' ages. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next-oldest generation. Occasionally a full scan is performed. Some high-level language computer architectures include hardware support for real-time garbage collection. Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system.
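The tracing strategy described in the Strategies section can likewise be illustrated with a minimal mark-and-sweep sketch in Python; this is a toy model for exposition (the "Obj", "mark", and "sweep" names are invented here), not the algorithm of any production collector:

```python
class Obj:
    """A heap object that may reference other heap objects."""
    def __init__(self, name):
        self.name = name
        self.refs = []          # outgoing references to other objects
        self.marked = False

def mark(roots):
    """Mark phase: flag every object reachable from the root set."""
    stack = list(roots)
    while stack:
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)

def sweep(heap):
    """Sweep phase: keep marked objects, reclaim the rest, reset the marks."""
    live = []
    for o in heap:
        if o.marked:
            o.marked = False
            live.append(o)
        else:
            print(f"reclaiming {o.name}")
    return live

# Build a tiny object graph: a -> b is reachable from the roots, while c and d
# reference each other but are unreachable (a cycle that reference counting
# alone could not reclaim).
a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs.append(b)
c.refs.append(d)
d.refs.append(c)

heap = [a, b, c, d]
roots = [a]

mark(roots)
heap = sweep(heap)              # reclaims c and d; a and b survive
print([o.name for o in heap])   # ['a', 'b']
```

Because reachability from the roots, rather than reference counts, decides what survives, the unreachable cycle formed by "c" and "d" is reclaimed, which pure reference counting cannot do.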
6736
33145
https://en.wikipedia.org/wiki?curid=6736
Canidae
Canidae (from Latin "canis", "dog") is a biological family of caniform carnivorans, constituting a clade. A member of this family is called a canid. The family includes three subfamilies: the Caninae, and the extinct Borophaginae and Hesperocyoninae. The Caninae are the canines, and include domestic dogs, wolves, coyotes, raccoon dogs, foxes, jackals, and other species. Canids are found on all continents except Antarctica, having arrived independently or accompanied by human beings over extended periods of time. Canids vary in size from the gray wolf to the fennec fox. The body forms of canids are similar, typically having long muzzles, upright ears, teeth adapted for cracking bones and slicing flesh, long legs, and bushy tails. They are mostly social animals, living together in family units or small groups and behaving co-operatively. Typically, only the dominant pair in a group breeds, and a litter of young is reared annually in an underground den. Canids communicate by scent signals and vocalizations. One canid, the domestic dog, originated from a symbiotic relationship with Upper Paleolithic humans and is one of the most widely kept domestic animals. Taxonomy. In the history of the carnivores, the family Canidae is represented by the two extinct subfamilies designated as Hesperocyoninae and Borophaginae, and the extant subfamily Caninae. This subfamily includes all living canids and their most recent fossil relatives. All living canids as a group form a dental monophyletic relationship with the extinct borophagines, with both groups having a bicuspid (two points) on the lower carnassial talonid, which gives this tooth an additional ability in mastication. This, together with the development of a distinct entoconid cusp and the broadening of the talonid of the first lower molar, and the corresponding enlargement of the talon of the upper first molar and reduction of its parastyle, distinguishes these late Cenozoic canids and constitutes the essential differences that identify their clade. The cat-like Feliformia and dog-like Caniformia emerged within the Carnivoramorpha around 45–42 Mya (million years ago). The Canidae first appeared in North America during the Late Eocene (37.8–33.9 Mya). They did not reach Eurasia until the Late Miocene, nor South America until the Late Pliocene. Phylogenetic relationships. The phylogenetic position of canids within Caniformia has been reconstructed in a cladogram based on fossil finds. Evolution. The Canidae are a diverse group of some 37 species ranging in size from the maned wolf with its long limbs to the short-legged bush dog. Modern canids inhabit forests, tundra, savannas, and deserts throughout tropical and temperate parts of the world. The evolutionary relationships between the species have been studied in the past using morphological approaches, but more recently, molecular studies have enabled the investigation of phylogenetic relationships. In some species, genetic divergence has been suppressed by the high level of gene flow between different populations, and where the species have hybridized, large hybrid zones exist. Eocene epoch. Carnivorans evolved after the extinction of the non-avian dinosaurs 66 million years ago. Around 50 million years ago, or earlier, in the Paleocene, the Carnivora split into two main divisions: caniform (dog-like) and feliform (cat-like). By 40 Mya, the first identifiable member of the dog family had arisen. Named "Prohesperocyon wilsoni", it is known from fossils found in southwest Texas. 
The chief features which identify it as a canid include the loss of the upper third molar (part of a trend toward a more shearing bite), and the structure of the middle ear, which has an enlarged bulla (the hollow bony structure protecting the delicate parts of the ear). "Prohesperocyon" probably had slightly longer limbs than its predecessors, and also had parallel and closely touching toes, which differ markedly from the splayed arrangement of the digits in bears. Canidae soon divided into three subfamilies, each of which diverged during the Eocene: Hesperocyoninae (about 39.74–15 Mya), Borophaginae (about 34–32 Mya), and Caninae (about 34–30 Mya; the only surviving subfamily). Members of each subfamily showed an increase in body mass with time, and some exhibited specialized hypercarnivorous diets that made them prone to extinction. Oligocene epoch. By the Oligocene, all three subfamilies (Hesperocyoninae, Borophaginae, and Caninae) had appeared in the fossil record of North America. The earliest and most primitive branch of the Canidae was the Hesperocyoninae, which included the coyote-sized "Mesocyon" of the Oligocene (38–24 Mya). These early canids probably evolved for the fast pursuit of prey in a grassland habitat; they resembled modern viverrids in appearance. Hesperocyonines eventually became extinct in the middle Miocene. One of the early Hesperocyonines, the genus "Hesperocyon", gave rise to "Archaeocyon" and "Leptocyon". These branches led to the borophagine and canine radiations. Miocene epoch. Around 8 Mya, the Beringian land bridge allowed members of the genus "Eucyon" to enter Asia from North America, and they went on to colonize Europe. Pliocene epoch. The "Canis", "Urocyon", and "Vulpes" genera developed from canids of North America, where the canine radiation began. The success of these canids was related to the development of lower carnassials that were capable of both mastication and shearing. Around 5 million years ago, some of the Old World "Eucyon" evolved into the first members of "Canis". In the Pliocene, around 4–5 Mya, "Canis lepophagus" appeared in North America. It was small and sometimes coyote-like, while other early forms were wolf-like. "C. latrans" (the coyote) is theorized to descend from "C. lepophagus". The formation of the Isthmus of Panama, about 3 Mya, joined South America to North America, allowing canids to invade South America, where they diversified. However, the last common ancestor of the South American canids lived in North America some 4 Mya, and more than one incursion across the new land bridge is likely, given that more than one lineage is present in South America. Two North American lineages found in South America are the gray fox ("Urocyon cinereoargenteus") and the now-extinct dire wolf ("Aenocyon dirus"). Besides these, there are species endemic to South America: the maned wolf ("Chrysocyon brachyurus"), the short-eared dog ("Atelocynus microtis"), the bush dog ("Speothos venaticus"), the crab-eating fox ("Cerdocyon thous"), and the South American foxes ("Lycalopex" spp.). The monophyly of this group has been established by molecular means. Pleistocene epoch. During the Pleistocene, the North American wolf line appeared with "Canis edwardii", clearly identifiable as a wolf, and "Canis rufus" appeared, possibly a direct descendant of "C. edwardii". Around 0.8 Mya, "Canis armbrusteri" emerged in North America. 
A large wolf, it was found all over North and Central America and was eventually supplanted by the dire wolf, which then spread into South America during the Late Pleistocene. By 0.3 Mya, a number of subspecies of the gray wolf ("C. lupus") had developed and had spread throughout Europe and northern Asia. The gray wolf colonized North America during the late Rancholabrean era across the Bering land bridge, with at least three separate invasions, with each one consisting of one or more different Eurasian gray wolf clades. MtDNA studies have shown that there are at least four extant "C. lupus" lineages. The dire wolf shared its habitat with the gray wolf, but became extinct in a large-scale extinction event that occurred around 11,500 years ago. It may have been more of a scavenger than a hunter; its molars appear to be adapted for crushing bones and it may have gone extinct as a result of the extinction of the large herbivorous animals on whose carcasses it relied. In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonized Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. When comparing the African and Eurasian golden jackals, the study concluded that the African specimens represented a distinct monophyletic lineage that should be recognized as a separate species, "Canis anthus" (African golden wolf). According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal ("Canis aureus") diverged from the wolf/coyote lineage 1.9 Mya, but the African golden wolf separated 1.3 Mya. Mitochondrial genome sequences indicated the Ethiopian wolf diverged from the wolf/coyote lineage slightly prior to that. Wild canids are native to all continents except Australasia and Antarctica, and also occur as feral (human-introduced) in New Guinea and Australia. They inhabit a wide range of different habitats, including deserts, mountains, forests, and grasslands. They vary in size from the fennec fox, which may be as little as in length and weigh , to the gray wolf, which may be up to long, and can weigh up to . Only a few species are arboreal—the gray fox, the closely related island fox and the raccoon dog habitually climb trees. All canids have a similar basic form, as exemplified by the gray wolf, although the relative length of muzzle, limbs, ears, and tail vary considerably between species. With the exceptions of the bush dog, the raccoon dog and some domestic dog breeds, canids have relatively long legs and lithe bodies, adapted for chasing prey. The tails are bushy and the length and quality of the pelage vary with the season. The muzzle portion of the skull is much more elongated than that of the cat family. The zygomatic arches are wide, there is a transverse lambdoidal ridge at the rear of the cranium and in some species, a sagittal crest running from front to back. The bony orbits around the eye never form a complete ring and the auditory bullae are smooth and rounded. Females have three to seven pairs of mammae. All canids are digitigrade, meaning they walk on their toes. The tip of the nose is always naked, as are the cushioned pads on the soles of the feet. 
These latter consist of a single pad behind the tip of each toe and a more-or-less three-lobed central pad under the roots of the digits. Hairs grow between the pads and in the Arctic fox the sole of the foot is densely covered with hair at some times of the year. With the exception of the four-toed African wild dog ("Lycaon pictus"), five toes are on the forefeet, but the pollex (thumb) is reduced and does not reach the ground. On the hind feet are four toes, but in some domestic dogs, a fifth vestigial toe, known as a dewclaw, is sometimes present, but has no anatomical connection to the rest of the foot. In some species, slightly curved nails are non-retractile and more-or-less blunt while other species have sharper, partially-retractile claws. The canine penis contains a baculum and a structure called the bulbus glandis that expands during copulation, forming a copulatory tie that lasts for up to an hour. Young canids are born blind, with their eyes opening a few weeks after birth. All living canids (Caninae) have a ligament analogous to the nuchal ligament of ungulates used to maintain the posture of the head and neck with little active muscle exertion; this ligament allows them to conserve energy while running long distances following scent trails with their nose to the ground. However, based on skeletal details of the neck, at least some of the Borophaginae (such as "Aelurodon") are believed to have lacked this ligament. Dentition. Dentition relates to the arrangement of teeth in the mouth, with the dental notation for the upper-jaw teeth using the upper-case letters I to denote incisors, C for canines, P for premolars, and M for molars, and the lower-case letters i, c, p and m to denote the mandible teeth. Teeth are numbered using one side of the mouth and from the front of the mouth to the back. In carnivores, the upper premolar P4 and the lower molar m1 form the carnassials that are used together in a scissor-like action to shear the muscle and tendon of prey. Canids use their premolars for cutting and crushing except for the upper fourth premolar P4 (the upper carnassial) that is only used for cutting. They use their molars for grinding except for the lower first molar m1 (the lower carnassial) that has evolved for both cutting and grinding depending on the canid's dietary adaptation. On the lower carnassial, the trigonid is used for slicing and the talonid is used for grinding. The ratio between the trigonid and the talonid indicates a carnivore's dietary habits, with a larger trigonid indicating a hypercarnivore and a larger talonid indicating a more omnivorous diet. Because of its low variability, the length of the lower carnassial is used to provide an estimate of a carnivore's body size. A study of the estimated bite force at the canine teeth of a large sample of living and fossil mammalian predators, when adjusted for their body mass, found that for placental mammals the bite force at the canines was greatest in the extinct dire wolf (163), followed among the modern canids by the four hypercarnivores that often prey on animals larger than themselves: the African wild dog (142), the gray wolf (136), the dhole (112), and the dingo (108). The bite force at the carnassials showed a similar trend to the canines. A predator's largest prey size is strongly influenced by its biomechanical limits. Most canids have 42 teeth, with a dental formula of: . The bush dog has only one upper molar with two below, the dhole has two above and two below. 
The molar teeth are strong in most species, allowing the animals to crack open bone to reach the marrow. The deciduous, or baby, tooth formula in canids is 3.1.3.0 in both the upper and lower jaws, molars being completely absent. Life history. Social behavior. Almost all canids are social animals and live together in groups. In general, they are territorial or have a home range and sleep in the open, using their dens only for breeding and sometimes in bad weather. In most foxes, and in many of the true dogs, a male and female pair work together to hunt and to raise their young. Gray wolves and some of the other larger canids live in larger groups called packs. African wild dog packs may consist of 20 to 40 animals, and packs of fewer than about seven individuals may be incapable of successful reproduction. Hunting in packs has the advantage that larger prey items can be tackled. Some species form packs or live in small family groups depending on the circumstances, including the type of available food. In most species, some individuals live on their own. Within a canid pack, there is a system of dominance so that the strongest, most experienced animals lead the pack. In most cases, the dominant male and female are the only pack members to breed. Communication. Canids communicate with each other by scent signals, by visual clues and gestures, and by vocalizations such as growls, barks, and howls. In most cases, groups have a home territory from which they drive out other conspecifics. Canids use urine scent marks to mark their food caches or to warn off trespassing individuals. Social behavior is also mediated by secretions from glands on the upper surface of the tail near its root and from the anal glands, preputial glands, and supracaudal glands. Reproduction. Canids as a group exhibit several reproductive traits that are uncommon among mammals as a whole. They are typically monogamous, provide paternal care to their offspring, have reproductive cycles with lengthy proestral and dioestral phases and have a copulatory tie during mating. They also retain adult offspring in the social group, suppressing the ability of these offspring to breed while making use of the alloparental care they can provide to help raise the next generation. Most canid species are spontaneous ovulators, though maned wolves are induced ovulators. During the proestral period, increased levels of estradiol make the female attractive to the male. There is a rise in progesterone during the estral phase, when the female is receptive. Following this, the level of estradiol fluctuates and there is a lengthy dioestrous phase during which the female is pregnant. Pseudo-pregnancy often occurs in canids that have ovulated but failed to conceive. A period of anestrus follows pregnancy or pseudo-pregnancy, there being only one oestral period during each breeding season. Small and medium-sized canids mostly have a gestation of 50 to 60 days, while larger species average 60 to 65 days. The time of year in which the breeding season occurs is related to the length of day, as has been shown for several species that, when moved across the equator, experience a six-month shift of phase. Domestic dogs and certain small canids in captivity may come into oestrus more often, perhaps because the photoperiod stimulus breaks down under conditions of artificial lighting. Canids have an oestrus period of 1 to 20 days, lasting about one week in most species. 
The size of a litter varies, with from one to 16 or more pups being born. The young are born small, blind and helpless and require a long period of parental care. They are kept in a den, most often dug into the ground, for warmth and protection. When the young begin eating solid food, both parents, and often other pack members, bring food back for them from the hunt. This is most often vomited up from the adult's stomach. Where such pack involvement in the feeding of the litter occurs, the breeding success rate is higher than is the case where females split from the group and rear their pups in isolation. Young canids may take a year to mature and learn the skills they need to survive. In some species, such as the African wild dog, male offspring usually remain in the natal pack, while females disperse as a group and join another small group of the opposite sex to form a new pack. Canids and humans. One canid, the domestic dog, entered into a partnership with humans a long time ago. The dog was the first domesticated species. The archaeological record shows the first undisputed dog remains buried beside humans 14,700 years ago, with disputed remains occurring 36,000 years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists. The fact that wolves are pack animals with cooperative social structures may have been the reason that the relationship developed. Humans benefited from the canid's loyalty, cooperation, teamwork, alertness and tracking abilities, while the wolf may have benefited from the use of weapons to tackle larger prey and the sharing of food. Humans and dogs may have evolved together. Among canids, only the gray wolf has widely been known to prey on humans. Nonetheless, at least two records of coyotes killing humans have been published, and at least two other reports of golden jackals killing children. Human beings have trapped and hunted some canid species for their fur and some, especially the gray wolf, the coyote and the red fox, for sport. Canids such as the dhole are now endangered in the wild because of persecution, habitat loss, a depletion of ungulate prey species and transmission of diseases from domestic dogs.
6739
7903804
https://en.wikipedia.org/wiki?curid=6739
Subspecies of Canis lupus
There are 38 subspecies of "Canis lupus" listed in the taxonomic authority "Mammal Species of the World" (2005, 3rd edition). These subspecies were named over the past 250 years, and since their naming, a number of them have gone extinct. The nominate subspecies is the Eurasian wolf ("Canis lupus lupus"). Taxonomy. In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his "Systema Naturae" the binomial nomenclature – or the two-word naming – of species. "Canis" is the Latin word meaning "dog", and under this genus he listed the dog-like carnivores including domestic dogs, wolves, and jackals. He classified the domestic dog as "Canis familiaris", and on the next page he classified the wolf as "Canis lupus". Linnaeus considered the dog to be a separate species from the wolf because of its head, body, and "cauda recurvata" – its upturning tail – which is not found in any other canid. In 1999, a study of mitochondrial DNA indicated that the domestic dog may have originated from multiple wolf populations, with the dingo and New Guinea singing dog "breeds" having developed at a time when human populations were more isolated from each other. In the third edition of "Mammal Species of the World" published in 2005, the mammalogist W. Christopher Wozencraft listed under the wolf "Canis lupus" some 36 wild subspecies, and proposed two additional subspecies: "familiaris" Linnaeus, 1758 and "dingo" Meyer, 1793. Wozencraft included "hallstromi" – the New Guinea singing dog – as a taxonomic synonym for the dingo. Wozencraft referred to the mDNA study as one of the guides in forming his decision, and listed the 38 subspecies under the biological common name of "wolf", with the nominate subspecies being the Eurasian wolf ("Canis lupus lupus") based on the type specimen that Linnaeus studied in Sweden. However, the classification of several of these canines as either species or subspecies has recently been challenged. List of extant subspecies. Living subspecies recognized by "MSW3" and divided into Old World and New World: Eurasia and Australasia. Sokolov and Rossolimo (1985) recognised nine Old World subspecies of wolf. These were "C. l. lupus", "C. l. albus", "C. l. pallipes", "C. l. cubanenesis", "C. l. campestris", "C. l. chanco", "C. l. desertorum", "C. l. hattai", and "C. l. hodophilax". In his 1995 statistical analysis of skull morphometrics, mammalogist Robert Nowak recognized the first four of those subspecies, synonymized "campestris", "chanco" and "desertorum" with "C. l. lupus", but did not examine the two Japanese subspecies. In addition, he recognized "C. l. communis" as a subspecies distinct from "C. l. lupus". In 2003, Nowak also recognized the distinctiveness of "C. l.", "C. l. hattai", "C. l. italicus", and "C. l. hodophilax". In 2005, "MSW3" included "C. l. filchneri". In 2003, two forms were distinguished in southern China and Inner Mongolia as being separate from "C. l. chanco" and "C. l. filchneri" and have yet to be named. North America. For North America, in 1944 the zoologist Edward Goldman recognized as many as 23 subspecies based on morphology. In 1959, E. Raymond Hall proposed that there had been 24 subspecies of "lupus" in North America. In 1970, L. David Mech proposed that there was "probably far too many subspecific designations...in use", as most did not exhibit enough points of differentiation to be classified as separate subspecies. 
The 24 subspecies were accepted by many authorities in 1981; these were distinguished on the basis of morphological or geographical differences, or a unique history. In 1995, the American mammalogist Robert M. Nowak analyzed data on the skull morphology of wolf specimens from around the world. For North America, he proposed that there were only five subspecies of the wolf. These were a large-toothed Arctic wolf named "C. l. arctos", a large wolf from Alaska and western Canada named "C. l. occidentalis", a small wolf from southeastern Canada named "C. l. lycaon", a small wolf from the southwestern U.S. named "C. l. baileyi" and a moderate-sized wolf that was originally found from Texas to Hudson Bay and from Oregon to Newfoundland named "C. l. nubilus". The taxonomic classification of "Canis lupus" in "Mammal Species of the World" (3rd edition, 2005) listed 27 subspecies of North American wolf, corresponding to the 24 "Canis lupus" subspecies and the three "Canis rufus" subspecies of Hall (1981). The extinct subspecies are listed in the following section. List of extinct subspecies. A number of subspecies recognized by "MSW3", as well as some described since its publication in 2005, have gone extinct over the past 150 years. Disputed subspecies. Global. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs ("Canis familiaris"). In 2020, a literature review of canid domestication stated that modern dogs were not descended from the same "Canis" lineage as modern wolves, and proposed that dogs may be descended from a Pleistocene wolf closer in size to a village dog. In 2021, the American Society of Mammalogists also considered dingoes a feral dog ("Canis familiaris") population. Eurasia. Italian wolf. The Italian wolf (or Apennine wolf) was first recognised as a distinct subspecies ("Canis lupus italicus") in 1921 by the zoologist Giuseppe Altobello. Altobello's classification was later rejected by several authors, including Reginald Innes Pocock, who synonymised "C. l. italicus" with "C. l. lupus". In 2002, the noted paleontologist R.M. Nowak reaffirmed the morphological distinctiveness of the Italian wolf and recommended the recognition of "Canis lupus italicus". A number of DNA studies have found the Italian wolf to be genetically distinct. In 2004, the genetic distinction of the Italian wolf subspecies was supported by an analysis that consistently assigned all the wolf genotypes of a sample in Italy to a single group. This population also showed a unique mitochondrial DNA control-region haplotype, the absence of private alleles and lower heterozygosity at microsatellite loci, as compared to other wolf populations. In 2010, a genetic analysis indicated that a single wolf haplotype (w22) unique to the Apennine Peninsula and one of the two haplotypes (w24, w25) unique to the Iberian Peninsula belonged to the same haplogroup as the prehistoric wolves of Europe. Another haplotype (w10) was found to be common to the Iberian Peninsula and the Balkans. These three geographically isolated populations exhibited a near absence of gene flow and spatially correspond to three glacial refugia. The taxonomic reference "Mammal Species of the World" (3rd edition, 2005) does not recognize "Canis lupus italicus"; however, NCBI/GenBank publishes research papers under that name. Iberian wolf. 
The Iberian wolf was first recognised as a distinct subspecies ("Canis lupus signatus") in 1907 by zoologist Ángel Cabrera. The wolves of the Iberian peninsula have morphologically distinct features from other Eurasian wolves and each are considered by their researchers to represent their own subspecies. The taxonomic reference "Mammal Species of the World" (3rd edition, 2005) does not recognize "Canis lupus signatus"; however, NCBI/Genbank does list it. Himalayan wolf. The Himalayan wolf is distinguished by its mitochondrial DNA, which is basal to all other wolves. The taxonomic name of this wolf is disputed, with the species "Canis himalayensis" being proposed based on two limited DNA studies. In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (male lineage) markers found that the Himalayan wolf was genetically basal to the Holarctic grey wolf and has an association with the African golden wolf. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group noted that the Himalayan wolf's distribution included the Himalayan range and the Tibetan Plateau. The group recommends that this wolf lineage be known as the "Himalayan wolf" and classified as "Canis lupus chanco" until a genetic analysis of the holotypes is available. In 2020, further research on the Himalayan wolf found that it warranted species-level recognition under the Unified Species Concept, the Differential Fitness Species Concept, and the Biological Species Concept. It was identified as an Evolutionary Significant Unit that warranted assignment onto the IUCN Red List for its protection. Indian plains wolf. The Indian plains wolf is a proposed clade within the Indian wolf ("Canis lupus pallipes") that is distinguished by its mitochondrial DNA, which is basal to all other wolves except for the Himalayan wolf. The taxonomic status of this wolf clade is disputed, with the separate species "Canis indica" being proposed based on two limited DNA studies. The proposal has not been endorsed because it relied on a limited number of museum and zoo samples that may not have been representative of the wild population, and a call for further fieldwork has been made. The taxonomic reference "Mammal Species of the World" (3rd edition, 2005) does not recognize "Canis indica"; however, NCBI/Genbank lists it as a new subspecies, "Canis lupus indica". Southern Chinese wolf. In 2017, a comprehensive study found that the gray wolf was present across all of mainland China, both in the past and today. It exists in southern China, which refutes claims made by some researchers in the Western world that the wolf had never existed in southern China. This wolf has not been taxonomically classified. In 2019, a genomic study on the wolves of China included museum specimens of wolves from southern China that were collected between 1963 and 1988. The wolves in the study formed three clades: northern Asian wolves that included those from northern China and eastern Russia, Himalayan wolves from the Tibetan Plateau, and a unique population from southern China. One specimen from Zhejiang Province in eastern China shared gene flow with the wolves from southern China; however, its genome was 12–14 percent admixed with a canid that may be the dhole or an unknown canid that predates the genetic divergence of the dhole. The wolf population from southern China is believed to still exist in that region. North America. Coastal wolves. 
A study of the three coastal wolves indicates a close phylogenetic relationship across regions that are geographically and ecologically contiguous, and the study proposed that "Canis lupus ligoni" (the Alexander Archipelago wolf), "Canis lupus columbianus" (the British Columbian wolf), and "Canis lupus crassodon" (the Vancouver Coastal Sea wolf) should be recognized as a single subspecies of "Canis lupus", synonymized as "Canis lupus crassodon". They share the same habitat and prey species, and form one of the six North American ecotypes identified in one study – a genetically and ecologically distinct population separated from other populations by its different type of habitat. Eastern wolf. There are two proposals for the origin of the eastern wolf. One is that the eastern wolf is a distinct species ("C. lycaon") that evolved in North America, as opposed to the gray wolf, which evolved in the Old World, and is related to the red wolf. The other is that it is derived from admixture between gray wolves, which inhabited the Great Lakes area, and coyotes, forming a hybrid that was classified as a distinct species by mistake. The taxonomic reference "Mammal Species of the World" (3rd edition, 2005) does not recognize "Canis lycaon"; however, NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered "Canis lycaon" a valid species. Red wolf. The red wolf is an enigmatic taxon, with two proposals over its origin. One is that the red wolf is a distinct species ("C. rufus") that has undergone human-influenced admixture with coyotes. The other is that it was never a distinct species but was derived from past admixture between coyotes and gray wolves, due to the gray wolf population being eliminated by humans. The taxonomic reference "Mammal Species of the World" (3rd edition, 2005) does not recognize "Canis rufus"; however, NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered "Canis rufus" a valid species.
6742
7903804
https://en.wikipedia.org/wiki?curid=6742
Central Asia
Central Asia is a region of Asia consisting of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. The countries as a group are also colloquially referred to as the "-stans", as all have names ending with the Persian suffix "-stan" (meaning "land of") in both their respective native languages and most other languages. The region is bounded by the Caspian Sea to the southwest, European Russia to the northwest, China and Mongolia to the east, Afghanistan and Iran to the south, and Siberia to the north. Together, the five Central Asian countries have a total population of more than 70 million. In the pre-Islamic and early Islamic eras, Central Asia was inhabited predominantly by Iranian peoples, populated by Eastern Iranian-speaking Bactrians, Sogdians, Chorasmians, and the semi-nomadic Scythians and Dahae. As a result of Turkic migration, Central Asia also became the homeland of the Kazakhs, Kyrgyz, Tatars, Turkmens, Uyghurs, and Uzbeks; Turkic languages largely replaced the Iranian languages spoken in the area, with the exception of Tajikistan and other areas where Tajik is spoken. The Silk Road trade routes crossed Central Asia, leading to the rise of prosperous trading cities and making the region a crossroads for the movement of people, goods, and ideas between Europe and the Far East. Most countries in Central Asia remain integrated into the world economy. From the mid-19th century until near the end of the 20th century, Central Asia was colonised by the Russians and incorporated into the Russian Empire, and later the Soviet Union, which led to Russians and other Slavs migrating into the area. Modern-day Central Asia is home to a large population of descendants of European settlers, who mostly live in Kazakhstan: 7 million Russians, 500,000 Ukrainians, and about 170,000 Germans. During the Stalinist period, the forced deportation of Koreans in the Soviet Union resulted in a population of over 300,000 Koreans in the region. Definitions. One of the first geographers to mention Central Asia as a distinct region of the world was Alexander von Humboldt. The borders of Central Asia are subject to multiple definitions. Historically, political geography and culture have been two significant parameters widely used in scholarly definitions of Central Asia. Humboldt's definition comprised every country lying within 5° of latitude north and south of the 44.5°N parallel. Humboldt mentioned some geographic features of this region, including the Caspian Sea in the west, the Altai mountains in the north, and the Hindu Kush and Pamir mountains in the south. He did not give an eastern border for the region. His legacy is still seen: Humboldt University of Berlin, named after him, offers a course in Central Asian studies. The Russian geographer Nikolaĭ Khanykov questioned the latitudinal definition of Central Asia and preferred a physical one, encompassing the landlocked countries of the region, including Afghanistan, Khorasan (northeastern Iran), Kyrgyzstan, Tajikistan, Turkmenistan, East Turkestan (Xinjiang), Mongolia, and Uzbekistan. Russian culture has two distinct terms: "Средняя Азия" ("Srednyaya Aziya" or "Middle Asia", the narrower definition, which includes only those traditionally non-Slavic Central Asian lands that were incorporated within the borders of historical Russia) and "Центральная Азия" ("Tsentralnaya Aziya" or "Central Asia", the wider definition, which includes Central Asian lands that have never been part of historical Russia). 
The latter definition includes Afghanistan and 'East Turkestan'. The most limited definition was the official one of the Soviet Union, which defined Middle Asia as consisting solely of Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, omitting Kazakhstan. Soon after the dissolution of the Soviet Union in 1991, the leaders of the four former Soviet Central Asian Republics met in Tashkent and declared that the definition of Central Asia should include Kazakhstan as well as the original four included by the Soviets. Since then, this has become the most common definition of Central Asia. In 1978, UNESCO defined the region as "Afghanistan, north-eastern Iran, Pakistan, northern India, western China, Mongolia and the Soviet Central Asian Republics". An alternative method is to define the region based on ethnicity, and in particular, areas populated by Eastern Turkic, Eastern Iranian, or Mongolian peoples. These areas include the Xinjiang Uyghur Autonomous Region, the Turkic regions of southern Siberia, the five republics, and Afghan Turkestan. Afghanistan as a whole, the northern and western areas of Pakistan and the Kashmir Valley of India may also be included. The Tibetans and Ladakhis are also included. Most of the mentioned peoples are considered the "indigenous" peoples of the vast region. Central Asia is sometimes referred to as Turkestan. Geography. Central Asia is a region of varied geography, including high passes and mountains (Tian Shan), vast deserts (Kyzyl Kum, Taklamakan), and especially treeless, grassy steppes. The vast steppe areas of Central Asia are considered together with the steppes of Eastern Europe as a homogeneous geographical zone known as the Eurasian Steppe. Much of the land of Central Asia is too dry or too rugged for farming. The Gobi Desert extends from the foot of the Pamirs, 77°E, to the Great Khingan (Da Hinggan) Mountains, 116°–118°E. Central Asia is also home to a number of notable geographic extremes. A majority of the people earn a living by herding livestock. Industrial activity centers in the region's cities. Major rivers of the region include the Amu Darya, the Syr Darya, the Irtysh, the Hari River and the Murghab River. Major bodies of water include the Aral Sea and Lake Balkhash, both of which are part of the huge west-central Asian endorheic basin that also includes the Caspian Sea. Both of these bodies of water have shrunk significantly in recent decades due to the diversion of water from the rivers that feed them for irrigation and industrial purposes. Water is an extremely valuable resource in arid Central Asia and can lead to rather significant international disputes. Historical regions. Central Asia is bounded on the north by the forests of Siberia. The northern half of Central Asia (Kazakhstan) is the middle part of the Eurasian steppe. Westward the Kazakh steppe merges into the Russian-Ukrainian steppe and eastward into the steppes and deserts of Dzungaria and Mongolia. Southward the land becomes increasingly dry and the nomadic population increasingly thin. The south supports areas of dense population and cities wherever irrigation is possible. The main irrigated areas are along the eastern mountains, along the Oxus and Jaxartes Rivers and along the north flank of the Kopet Dagh near the Persian border. East of the Kopet Dagh is the important oasis of Merv, and then a few places in Afghanistan such as Herat and Balkh. Two projections of the Tian Shan create three "bays" along the eastern mountains. 
The largest, in the north, is eastern Kazakhstan, traditionally called Jetysu or Semirechye which contains Lake Balkhash. In the center is the small but densely populated Ferghana valley. In the south is Bactria, later called Tocharistan, which is bounded on the south by the Hindu Kush mountains of Afghanistan. The Syr Darya (Jaxartes) rises in the Ferghana valley and the Amu Darya (Oxus) rises in Bactria. Both flow northwest into the Aral Sea. Where the Oxus meets the Aral Sea it forms a large delta called Khwarazm and later the Khanate of Khiva. North of the Oxus is the less-famous but equally important Zarafshan River which waters the great trading cities of Bokhara and Samarkand. The other great commercial city was Tashkent northwest of the mouth of the Ferghana valley. The land immediately north of the Oxus was called Transoxiana and also Sogdia, especially when referring to the Sogdian merchants who dominated the silk road trade. To the east, Dzungaria and the Tarim Basin were united into the Manchu-Chinese province of Xinjiang (Sinkiang; Hsin-kiang) about 1759. Caravans from China usually went along the north or south side of the Tarim basin and joined at Kashgar before crossing the mountains northwest to Ferghana or southwest to Bactria. A minor branch of the silk road went north of the Tian Shan through Dzungaria and Zhetysu before turning southwest near Tashkent. Nomadic migrations usually moved from Mongolia through Dzungaria before turning southwest to conquer the settled lands or continuing west toward Europe. The Kyzyl Kum Desert or semi-desert is between the Oxus and Jaxartes, and the Karakum Desert is between the Oxus and Kopet Dagh in Turkmenistan. Khorasan meant approximately northeast Persia and northern Afghanistan. Margiana was the region around Merv. The Ustyurt Plateau is between the Aral and Caspian Seas. To the southwest, across the Kopet Dagh, lies Persia. From here Persian and Islamic civilisation penetrated Central Asia and dominated its high culture until the Russian conquest. In the southeast is the route to India. In early times Buddhism spread north and throughout much of history warrior kings and tribes would move southeast to establish their rule in northern India. Most nomadic conquerors entered from the northeast. After 1800 western civilisation in its Russian and Soviet form penetrated from the northwest. Climate. Because Central Asia is landlocked and not buffered by a large body of water, temperature fluctuations are often severe, excluding the hot, sunny summer months. In most areas, the climate is dry and continental, with hot summers and cool to cold winters, with occasional snowfall. Outside high-elevation areas, the climate is mostly semi-arid to arid. In lower elevations, summers are hot with blazing sunshine. Winters feature occasional rain or snow from low-pressure systems that cross the area from the Mediterranean Sea. Average monthly precipitation is very low from July to September, rises in autumn (October and November) and is highest in March or April, followed by swift drying in May and June. Winds can be strong, producing dust storms sometimes, especially toward the end of the summer in September and October. Specific cities that exemplify Central Asian climate patterns include Tashkent and Samarkand, Uzbekistan, Ashgabat, Turkmenistan, and Dushanbe, Tajikistan. The last of these represents one of the wettest climates in Central Asia, with an average annual precipitation of over . 
Biogeographically, Central Asia is part of the Palearctic realm. The largest biome in Central Asia is the temperate grasslands, savannas, and shrublands biome. Central Asia also contains the montane grasslands and shrublands, deserts and xeric shrublands, and temperate coniferous forests biomes. Climate change. As of 2022, there has been a scarcity of research on climate impacts in Central Asia, even though the region is warming faster than the global average and is generally considered to be one of the more climate-vulnerable regions in the world. Along with West Asia, it has already experienced greater increases in hot temperature extremes than other parts of Asia. Rainfall in Central Asia has decreased, unlike elsewhere in Asia, and the frequency and intensity of dust storms have grown (partly due to poor land use practices). Droughts have already become more likely, and their likelihood is expected to continue increasing with greater climate change. By 2050, people in the Amu Darya basin may face severe water scarcity for both climatic and socioeconomic reasons. History. Although the place of Central Asia in world history was marginalised during the golden age of Orientalism, contemporary historiography has rediscovered the region's "centrality". The history of Central Asia is defined by the area's climate and geography. The aridity of the region made agriculture difficult, and its distance from the sea cut it off from much trade. Thus, few major cities developed in the region; instead, the area was for millennia dominated by the nomadic horse peoples of the steppe. Relations between the steppe nomads and the settled people in and around Central Asia were long marked by conflict. The nomadic lifestyle was well suited to warfare, and the steppe horse riders became some of the most militarily potent people in the world, limited only by their lack of internal unity. Any internal unity that was achieved was most probably due to the influence of the Silk Road, which passed through Central Asia. Periodically, great leaders or changing conditions would organise several tribes into one force and create an almost unstoppable power. These included the Hun invasion of Europe, the Five Barbarians rebellions in China and, most notably, the Mongol conquest of much of Eurasia. During pre-Islamic and early Islamic times, Central Asia was inhabited predominantly by speakers of Iranian languages. Among the ancient sedentary Iranian peoples, the Sogdians and Chorasmians played an important role, while Iranian peoples such as the Scythians and, later, the Alans lived a nomadic or semi-nomadic lifestyle. The main migration of Turkic peoples occurred between the 6th and 11th centuries, when they spread across most of Central Asia. Over the past few thousand years, the Eurasian Steppe slowly transitioned from Indo-European- and Iranian-speaking groups with predominantly West Eurasian ancestry to a more heterogeneous region with increasing East Asian ancestry brought by Turkic and Mongolian groups, through extensive Turkic and later Mongol migrations out of Mongolia and the slow assimilation of local populations. In the 8th century AD, the Islamic expansion reached the region but had no significant demographic impact. In the 13th century AD, the Mongolian invasion of Central Asia brought most of the region under Mongolian influence, which had "enormous demographic success" but did not impact the cultural or linguistic landscape. Invasion routes through Central Asia. 
Once populated by Iranian tribes and other Indo-European-speaking peoples, Central Asia experienced numerous invasions emanating from southern Siberia and Mongolia that would drastically affect the region. Genetic data show that the different Central Asian Turkic-speaking peoples have between ~22% and ~70% East Asian ancestry (represented by "Baikal hunter-gatherer ancestry" shared with other Northeast Asians and Eastern Siberians), in contrast to Iranian-speaking Central Asians, specifically Tajiks, who display genetic continuity with the Indo-Iranians of the Iron Age. Certain Turkic ethnic groups, specifically the Kazakhs, display even higher East Asian ancestry. This is explained by substantial Mongolian influence on the Kazakh genome, through significant admixture between the medieval Kipchaks of Central Asia and the invading medieval Mongols. The data suggest that the Mongol invasion of Central Asia had a lasting impact on the genetic makeup of Kazakhs. According to recent genetic genealogy testing, the genetic admixture of the Uzbeks clusters somewhere between the Iranian peoples and the Mongols. Another study shows that the Uzbeks are closely related to other Turkic peoples of Central Asia and rather distant from Iranian peoples. The study also analysed the maternal and paternal DNA haplogroups and shows that Turkic-speaking groups are more homogeneous than Iranian-speaking groups. Genetic studies analysing the full genome of Uzbeks and other Central Asian populations found that about 27–60% of Uzbek ancestry is derived from East Asian sources, with the remaining ancestry (~40–73%) being made up of European and Middle Eastern components. According to a recent study, the Kyrgyz, Kazakhs, Uzbeks, and Turkmens share more of their gene pool with various East Asian and Siberian populations than with West Asian or European populations; although the Turkmens derive a large percentage of their ancestry from populations to the east, their main components are Central Asian. The study further suggests that both migration and linguistic assimilation helped to spread the Turkic languages in Eurasia. Medieval to modern history. The Tang dynasty of China expanded westwards and controlled large parts of Central Asia, directly and indirectly through its Turkic vassals. Tang China actively supported the Turkification of Central Asia while extending its cultural influence. The Tang Chinese were defeated by the Abbasid Caliphate at the Battle of Talas in 751, marking the end of the Tang dynasty's western expansion and of 150 years of Chinese influence. The Tibetan Empire took the chance to rule portions of Central Asia and South Asia. During the 13th and 14th centuries, the Mongols conquered and ruled the largest contiguous empire in recorded history. Most of Central Asia fell under the control of the Chagatai Khanate. The dominance of the nomads ended in the 16th century, as firearms allowed settled peoples to gain control of the region. Russia, China, and other powers expanded into the region and had captured the bulk of Central Asia by the end of the 19th century. The Qing dynasty gained control of East Turkestan in the 18th century as a result of a long struggle with the Dzungars. The Russian Empire conquered the lands of the nomadic Kazakhs, Turkmens and Kyrgyz and the Central Asian khanates in the 19th century. A major revolt known as the Dungan Revolt occurred in the 1860s and 1870s in the eastern part of Central Asia, and Qing rule almost collapsed in all of East Turkestan. 
After the Russian Revolution, the western Central Asian regions were incorporated into the Soviet Union. The eastern part of Central Asia, known as Xinjiang, was incorporated into the People's Republic of China, having previously been ruled by the Qing dynasty and the Republic of China. Mongolia gained its independence from China and has remained independent, although it was effectively a Soviet satellite state until the dissolution of the Soviet Union. Afghanistan remained relatively free of major Soviet influence until the Saur Revolution of 1978. The Soviet areas of Central Asia saw much industrialisation and construction of infrastructure, but also the suppression of local cultures, hundreds of thousands of deaths from failed collectivisation programmes, and a lasting legacy of ethnic tensions and environmental problems. Soviet authorities deported millions of people, including entire nationalities, from western areas of the Soviet Union to Central Asia and Siberia. According to Touraj Atabaki and Sanjyot Mehendale, "From 1959 to 1970, about two million people from various parts of the Soviet Union migrated to Central Asia, of which about one million moved to Kazakhstan." After the collapse of the Soviet Union. With the collapse of the Soviet Union, five countries gained independence: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. The historian and Turkologist Peter B. Golden explains that without the imperial manipulations of the Russian Empire and, above all, the Soviet Union, the creation of these republics would have been impossible. In nearly all the new states, former Communist Party officials retained power as local strongmen. None of the new republics could be considered functional democracies in the early days of independence, although in recent years Kyrgyzstan, Kazakhstan and Mongolia have made further progress towards more open societies, unlike Uzbekistan, Tajikistan, and Turkmenistan, which have maintained many Soviet-style repressive tactics. Beginning in the early 2000s, the Chinese government engaged in a series of human rights abuses against Uyghurs and other ethnic and religious minorities in Xinjiang. Culture. Arts. At the crossroads of Asia, shamanistic practices live alongside Buddhism. Thus, Yama, Lord of Death, was revered in Tibet as a spiritual guardian and judge. Mongolian Buddhism, in particular, was influenced by Tibetan Buddhism. The Qianlong Emperor of Qing China in the 18th century was a Tibetan Buddhist and would sometimes travel from Beijing to other cities for personal religious worship. Central Asia also has an indigenous form of improvisational oral poetry that is over 1,000 years old. It is principally practiced in Kyrgyzstan and Kazakhstan by "akyns", lyrical improvisers. They engage in lyrical battles, the "aytysh" or the "alym sabak". The tradition arose out of early bardic oral historians. They are usually accompanied by a stringed instrument – in Kyrgyzstan, a three-stringed komuz, and in Kazakhstan, a similar two-stringed instrument, the dombra. Photography in Central Asia began to develop after 1882, when a Russian Mennonite photographer named Wilhelm Penner moved to the Khanate of Khiva during the Mennonite migration to Central Asia led by Claas Epp, Jr. Upon his arrival in the Khanate of Khiva, Penner shared his photography skills with a local student, Khudaybergen Divanov, who later became the founder of Uzbek photography. 
Some also learn to sing the "Manas", Kyrgyzstan's epic poem (those who learn the "Manas" exclusively but do not improvise are called "manaschis"). During Soviet rule, "akyn" performance was co-opted by the authorities and subsequently declined in popularity. With the fall of the Soviet Union, it has enjoyed a resurgence, although "akyns" still use their art to campaign for political candidates. A 2005 article in "The Washington Post" proposed a similarity between the improvisational art of "akyns" and modern freestyle rap performed in the West. As a consequence of Russian colonisation, European fine arts – painting, sculpture and graphics – have developed in Central Asia. The first years of the Soviet regime saw the appearance of modernism, which took inspiration from the Russian avant-garde movement. Until the 1980s, Central Asian arts had developed along with general tendencies of Soviet arts. In the 1990s, the arts of the region underwent significant changes. Institutionally, some fields of art were regulated by the birth of the art market, some remained representatives of official views, while many were sponsored by international organisations. The years 1990–2000 saw the establishment of contemporary art in the region. Many important international exhibitions take place in the region, Central Asian art is represented in European and American museums, and the Central Asian Pavilion at the Venice Biennale has been organised since 2005. Sports. Equestrian sports are traditional in Central Asia, with disciplines such as endurance riding, buzkashi, dzhigit and kyz kuu. The traditional game of buzkashi is played throughout the Central Asian region, and the countries sometimes organise buzkashi competitions amongst each other. The first regional competition among the Central Asian countries, Russia, Chinese Xinjiang and Turkey was held in 2013. The first world title competition was played in 2017 and won by Kazakhstan. Association football is popular across Central Asia. Most countries are members of the Central Asian Football Association, a region of the Asian Football Confederation; Kazakhstan, however, is a member of UEFA. Wrestling is popular across Central Asia, with Kazakhstan having claimed 14 Olympic medals, Uzbekistan seven, and Kyrgyzstan three. As former Soviet states, Central Asian countries have been successful in gymnastics. Mixed martial arts is one of the more common sports in Central Asia, with Kyrgyz athlete Valentina Shevchenko holding the UFC Flyweight Championship title. Cricket is the most popular sport in Afghanistan. The Afghanistan national cricket team, first formed in 2001, has claimed wins over Bangladesh, the West Indies and Zimbabwe. Notable Kazakh competitors include cyclists Alexander Vinokourov and Andrey Kashechkin, boxers Vassiliy Jirov and Gennady Golovkin, runner Olga Shishigina, decathlete Dmitriy Karpov, gymnast Aliya Yussupova, judokas Askhat Zhitkeyev and Maxim Rakov, skier Vladimir Smirnov, weightlifter Ilya Ilyin, and figure skaters Denis Ten and Elizabet Tursynbaeva. Notable Uzbekistani competitors include cyclist Djamolidine Abdoujaparov, boxer Ruslan Chagaev, canoer Michael Kolganov, gymnast Oksana Chusovitina, tennis player Denis Istomin, chess player Rustam Kasimdzhanov, and figure skater Misha Ge. Economy. Since gaining independence in the early 1990s, the Central Asian republics have gradually been moving from state-controlled economies to market economies. 
However, reform has been deliberately gradual and selective, as governments strive to limit the social cost and ameliorate living standards. All five countries are implementing structural reforms to improve competitiveness. Kazakhstan is the only CIS country to be included in the 2020 and 2019 IWB World Competitiveness rankings. In particular, they have been modernizing the industrial sector and fostering the development of service industries through business-friendly fiscal policies and other measures, to reduce the share of agriculture in GDP. Between 2005 and 2013, the share of agriculture dropped in all but Tajikistan, where it increased while industry decreased. The fastest growth in industry was observed in Turkmenistan, whereas the services sector progressed most in the other four countries. Public policies pursued by Central Asian governments focus on buffering the political and economic spheres from external shocks. This includes maintaining a trade balance, minimizing public debt and accumulating national reserves. They cannot totally insulate themselves from negative exterior forces, however, such as the persistently weak recovery of global industrial production and international trade since 2008. Notwithstanding this, they have emerged relatively unscathed from the 2008 financial crisis. Growth faltered only briefly in Kazakhstan, Tajikistan and Turkmenistan and not at all in Uzbekistan, where the economy grew by more than 7% per year on average between 2008 and 2013. Turkmenistan achieved unusually high 14.7% growth in 2011. Kyrgyzstan's performance has been more erratic but this phenomenon was visible well before 2008. The republics which have fared best benefitted from the commodities boom during the first decade of the 2000s. Kazakhstan and Turkmenistan have abundant oil and natural gas reserves and Uzbekistan's own reserves make it more or less self-sufficient. Kyrgyzstan, Tajikistan and Uzbekistan all have gold reserves and Kazakhstan has the world's largest uranium reserves. Fluctuating global demand for cotton, aluminium and other metals (except gold) in recent years has hit Tajikistan hardest, since aluminium and raw cotton are its chief exports − the Tajik Aluminium Company is the country's primary industrial asset. In January 2014, the Minister of Agriculture announced the government's intention to reduce the acreage of land cultivated by cotton to make way for other crops. Uzbekistan and Turkmenistan are major cotton exporters themselves, ranking fifth and ninth respectively worldwide for volume in 2014. Although both exports and imports have grown significantly over the past decade, Central Asian republics countries remain vulnerable to economic shocks, owing to their reliance on exports of raw materials, a restricted circle of trading partners and a negligible manufacturing capacity. Kyrgyzstan has the added disadvantage of being considered resource poor, although it does have ample water. Most of its electricity is generated by hydropower. The Kyrgyz economy was shaken by a series of shocks between 2010 and 2012. In April 2010, President Kurmanbek Bakiyev was deposed by a popular uprising, with former minister of foreign affairs Roza Otunbayeva assuring the interim presidency until the election of Almazbek Atambayev in November 2011. Food prices rose two years in a row and, in 2012, production at the major Kumtor gold mine fell by 60% after the site was perturbed by geological movements. 
According to the World Bank, 33.7% of the population was living in absolute poverty in 2010 and 36.8% a year later. Despite high rates of economic growth in recent years, GDP per capita in Central Asia was higher than the average for developing countries in 2013 only in Kazakhstan (PPP$23,206) and Turkmenistan (PPP$14,201). It dropped to PPP$5,167 for Uzbekistan, home to 45% of the region's population, and was even lower for Kyrgyzstan and Tajikistan. Kazakhstan leads the Central Asian region in terms of foreign direct investment. The Kazakh economy accounts for more than 70% of all the investment attracted to Central Asia. In terms of the economic influence of big powers, China is viewed as one of the key economic players in Central Asia, especially after Beijing launched its grand development strategy known as the Belt and Road Initiative (BRI) in 2013. The Central Asian countries attracted $378.2 billion of foreign direct investment (FDI) between 2007 and 2019. Kazakhstan accounted for 77.7% of the total FDI directed to the region. Kazakhstan is also the largest country in Central Asia, and it accounts for more than 60 percent of the region's gross domestic product (GDP). Central Asian nations fared comparatively well economically during the COVID-19 pandemic. Many variables were likely at play, but disparities in economic structure, the intensity of the pandemic, and the accompanying containment efforts may all explain part of the variation in countries' experiences. Central Asian countries are, however, predicted to be hit hardest in the future. Only 4% of permanently closed businesses anticipate returning in the future, with huge differences across sectors, ranging from 3% in lodging and food services to 27% in retail commerce. In 2022, experts assessed that global climate change is likely to pose multiple economic risks to Central Asia and may result in losses of many billions of dollars unless proper adaptation measures are developed to counter rising temperatures across the region. Education, science and technology. Modernisation of research infrastructure. Bolstered by strong economic growth in all but Kyrgyzstan, national development strategies are fostering new high-tech industries, pooling resources and orienting the economy towards export markets. Many national research institutions established during the Soviet era have since become obsolete with the development of new technologies and changing national priorities. This has led countries to reduce the number of national research institutions since 2009 by grouping existing institutions to create research hubs. Several of the Turkmen Academy of Sciences's institutes were merged in 2014: the Institute of Botany was merged with the Institute of Medicinal Plants to become the Institute of Biology and Medicinal Plants; the Sun Institute was merged with the Institute of Physics and Mathematics to become the Institute of Solar Energy; and the Institute of Seismology merged with the State Service for Seismology to become the Institute of Seismology and Atmospheric Physics. In Uzbekistan, more than 10 institutions of the Academy of Sciences have been reorganised, following the issuance of a decree by the Cabinet of Ministers in February 2012. The aim is to orient academic research towards problem-solving and ensure continuity between basic and applied research. 
For example, the Mathematics and Information Technology Research Institute has been subsumed under the National University of Uzbekistan and the Institute for Comprehensive Research on Regional Problems of Samarkand has been transformed into a problem-solving laboratory on environmental issues within Samarkand State University. Other research institutions have remained attached to the Uzbek Academy of Sciences, such as the Centre of Genomics and Bioinformatics. Kazakhstan and Turkmenistan are also building technology parks as part of their drive to modernise infrastructure. In 2011, construction began of a technopark in the village of Bikrova near Ashgabat, the Turkmen capital. It will combine research, education, industrial facilities, business incubators and exhibition centres. The technopark will house research on alternative energy sources (sun, wind) and the assimilation of nanotechnologies. Between 2010 and 2012, technological parks were set up in the east, south and north Kazakhstan oblasts (administrative units) and in the capital, Astana. A Centre for Metallurgy was also established in the east Kazakhstan oblast, as well as a Centre for Oil and Gas Technologies which will be part of the planned Caspian Energy Hub. In addition, the Centre for Technology Commercialisation has been set up in Kazakhstan as part of the Parasat National Scientific and Technological Holding, a joint stock company established in 2008 that is 100% state-owned. The centre supports research projects in technology marketing, intellectual property protection, technology licensing contracts and start-ups. The centre plans to conduct a technology audit in Kazakhstan and to review the legal framework regulating the commercialisation of research results and technology. Countries are seeking to augment the efficiency of traditional extractive sectors but also to make greater use of information and communication technologies and other modern technologies, such as solar energy, to develop the business sector, education and research. In March 2013, two research institutes were created by presidential decree to foster the development of alternative energy sources in Uzbekistan, with funding from the Asian Development Bank and other institutions: the SPU Physical−Technical Institute (Physics Sun Institute) and the International Solar Energy Institute. Three universities have been set up since 2011 to foster competence in strategic economic areas: Nazarbayev University in Kazakhstan (first intake in 2011), an international research university, Inha University in Uzbekistan (first intake in 2014), specializing in information and communication technologies, and the International Oil and Gas University in Turkmenistan (founded in 2013). Kazakhstan and Uzbekistan are both generalizing the teaching of foreign languages at school, in order to facilitate international ties. Kazakhstan and Uzbekistan have both adopted the three-tier bachelor's, master's and PhD degree system, in 2007 and 2012 respectively, which is gradually replacing the Soviet system of Candidates and Doctors of Science. In 2010, Kazakhstan became the only Central Asian member of the Bologna Process, which seeks to harmonise higher education systems in order to create a European Higher Education Area. Financial investment in research. The Central Asian republics' ambition of developing the business sector, education and research is being hampered by chronic low investment in research and development. 
Over the decade to 2013, the region's investment in research and development hovered around 0.2–0.3% of GDP. Uzbekistan broke with this trend in 2013 by raising its research intensity to 0.41% of GDP. Kazakhstan is the only country where the business enterprise and private non-profit sectors make any significant contribution to research and development – but research intensity overall is low in Kazakhstan: just 0.18% of GDP in 2013. Moreover, few industrial enterprises conduct research in Kazakhstan. Only one in eight (12.5%) of the country's manufacturing firms were active in innovation in 2012, according to a survey by the UNESCO Institute for Statistics. Enterprises prefer to purchase technological solutions that are already embodied in imported machinery and equipment. Just 4% of firms purchase the licences and patents that come with this technology. Nevertheless, there appears to be a growing demand for the products of research, since enterprises spent 4.5 times more on scientific and technological services in 2008 than in 1997. Trends in researchers. Kazakhstan and Uzbekistan have the highest researcher densities in Central Asia. The number of researchers per million population is close to the world average (1,083 in 2013) in Kazakhstan (1,046) and higher than the world average in Uzbekistan (1,097). Uzbekistan is in a particularly vulnerable position, with its heavy reliance on higher education: three-quarters of researchers were employed by the university sector in 2013 and just 6% in the business enterprise sector. With most Uzbek university researchers nearing retirement, this imbalance imperils Uzbekistan's research future. Almost all holders of a Candidate of Science, Doctor of Science or PhD are more than 40 years old and half are aged over 60; more than one in three researchers (38.4%) holds a PhD degree or its equivalent, the remainder holding a bachelor's or master's degree. Kazakhstan, Kyrgyzstan and Uzbekistan have all maintained a share of women researchers above 40% since the fall of the Soviet Union. Kazakhstan has even achieved gender parity, with Kazakh women dominating medical and health research and representing some 45–55% of engineering and technology researchers in 2013. In Tajikistan, however, only one in three scientists (34%) was a woman in 2013, down from 40% in 2002. Although policies are in place to give Tajik women equal rights and opportunities, these are underfunded and poorly understood. Turkmenistan has offered a state guarantee of equality for women since a law adopted in 2007, but the country does not make data available on higher education, research expenditure or researchers, which makes it impossible to draw any conclusions as to the law's impact on research. Research output. 
The number of scientific papers published in Central Asia grew by almost 50% between 2005 and 2014, driven by Kazakhstan, which overtook Uzbekistan over this period to become the region's most prolific scientific publisher, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Between 2005 and 2014, Kazakhstan's share of scientific papers from the region grew from 35% to 56%. Although two-thirds of papers from the region have a foreign co-author, the main partners tend to come from beyond Central Asia, namely the Russian Federation, the USA, Germany, the United Kingdom and Japan. Five Kazakh patents were registered at the US Patent and Trademark Office between 2008 and 2013, compared to three for Uzbek inventors and none at all for the other three Central Asian republics, Kyrgyzstan, Tajikistan and Turkmenistan. Kazakhstan is Central Asia's main trader in high-tech products. Kazakh imports nearly doubled between 2008 and 2013, from US$2.7 billion to US$5.1 billion. There has been a surge in imports of computers, electronics and telecommunications equipment; these products represented an investment of US$744 million in 2008 and US$2.6 billion five years later. The growth in exports was more gradual – from US$2.3 billion to US$3.1 billion – and was dominated by chemical products (other than pharmaceuticals), which represented two-thirds of exports in 2008 (US$1.5 billion) and 83% (US$2.6 billion) in 2013. International cooperation. The five Central Asian republics belong to several international bodies, including the Organization for Security and Co-operation in Europe, the Economic Cooperation Organization and the Shanghai Cooperation Organisation. They are also members of the Central Asia Regional Economic Cooperation (CAREC) Programme, which also includes Afghanistan, Azerbaijan, China, Mongolia and Pakistan. In November 2011, the 10 member countries adopted the "CAREC 2020 Strategy", a blueprint for furthering regional co-operation. Over the decade to 2020, US$50 billion is being invested in priority projects in transport, trade and energy to improve members' competitiveness. The landlocked Central Asian republics are conscious of the need to co-operate in order to maintain and develop their transport networks and energy, communication and irrigation systems. Only Kazakhstan, Azerbaijan, and Turkmenistan border the Caspian Sea and none of the republics has direct access to an ocean, complicating the transportation of hydrocarbons, in particular, to world markets. Kazakhstan was also one of the three founding members of the Eurasian Economic Union in 2014, along with Belarus and the Russian Federation. Armenia and Kyrgyzstan have since joined this body. As co-operation among the member states in science and technology is already considerable and well-codified in legal texts, the Eurasian Economic Union is expected to have a limited additional impact on co-operation among public laboratories or academia, but it should encourage business ties and scientific mobility, since it includes provision for the free circulation of labour and unified patent regulations. Kazakhstan and Tajikistan participated in the Innovative Biotechnologies Programme (2011–2015) launched by the Eurasian Economic Community, the predecessor of the Eurasian Economic Union. The programme also involved Belarus and the Russian Federation. Within this programme, prizes were awarded at an annual bio-industry exhibition and conference. 
In 2012, 86 Russian organisations participated, plus three from Belarus, one from Kazakhstan and three from Tajikistan, as well as two scientific research groups from Germany. At the time, Vladimir Debabov, scientific director of the Genetika State Research Institute for Genetics and the Selection of Industrial Micro-organisms in the Russian Federation, stressed the paramount importance of developing bio-industry. "In the world today, there is a strong tendency to switch from petrochemicals to renewable biological sources", he said. "Biotechnology is developing two to three times faster than chemicals." Kazakhstan also participated in a second project of the Eurasian Economic Community, the establishment of the Centre for Innovative Technologies on 4 April 2013, with the signing of an agreement between the Russian Venture Company (a government fund of funds), the Kazakh JSC National Agency and the Belarusian Innovative Foundation. Each of the selected projects is entitled to funding of US$3–90 million and is implemented within a public–private partnership. The first few approved projects focused on supercomputers, space technologies, medicine, petroleum recycling, nanotechnologies and the ecological use of natural resources. Once these initial projects have spawned viable commercial products, the venture company plans to reinvest the profits in new projects. This venture company is not a purely economic structure; it has also been designed to promote a common economic space among the three participating countries. Kazakhstan also recognises the role that civil society initiatives play in addressing the consequences of the COVID-19 crisis. Four of the five Central Asian republics have also been involved in a project launched by the European Union in September 2013, IncoNet CA. The aim of this project is to encourage Central Asian countries to participate in research projects within Horizon 2020, the European Union's eighth research and innovation funding programme. These research projects focus on three societal challenges considered to be of mutual interest to both the European Union and Central Asia, namely climate change, energy and health. IncoNet CA builds on the experience of earlier projects which involved other regions, such as Eastern Europe, the South Caucasus and the Western Balkans. IncoNet CA focuses on twinning research facilities in Central Asia and Europe. It involves a consortium of partner institutions from Austria, the Czech Republic, Estonia, Germany, Hungary, Kazakhstan, Kyrgyzstan, Poland, Portugal, Tajikistan, Turkey and Uzbekistan. In May 2014, the European Union launched a 24-month call for project applications from twinned institutions – universities, companies and research institutes – for funding of up to €10,000 to enable them to visit one another's facilities to discuss project ideas or prepare joint events like workshops. The International Science and Technology Center (ISTC) was established in 1992 by the European Union, Japan, the Russian Federation and the US to engage weapons scientists in civilian research projects and to foster technology transfer. ISTC branches have been set up in the following countries party to the agreement: Armenia, Belarus, Georgia, Kazakhstan, Kyrgyzstan and Tajikistan. The headquarters of ISTC were moved to Nazarbayev University in Kazakhstan in June 2014, three years after the Russian Federation announced its withdrawal from the centre.
Kyrgyzstan, Tajikistan and Kazakhstan have been members of the World Trade Organization since 1998, 2013 and 2015 respectively. Demographics. By a broad definition including Mongolia and Afghanistan, more than 90 million people live in Central Asia, about 2% of Asia's total population. Of the regions of Asia, only North Asia has fewer people. It has a population density of 9 people per km2, vastly less than the 80.5 people per km2 of the continent as a whole. Kazakhstan is one of the least densely populated countries in the world. Languages. Russian, as well as being spoken by around six million ethnic Russians and Ukrainians of Central Asia, is the de facto lingua franca throughout the former Soviet Central Asian Republics. Mandarin Chinese has an equally dominant presence in Inner Mongolia, Qinghai and Xinjiang. The languages of the majority of the inhabitants of the former Soviet Central Asian Republics belong to the Turkic language group. Turkmen is mainly spoken in Turkmenistan, and as a minority language in Afghanistan, Russia, Iran and Turkey. Kazakh and Kyrgyz are related languages of the Kypchak group of Turkic languages and are spoken throughout Kazakhstan, Kyrgyzstan, and as a minority language in Tajikistan, Afghanistan and Xinjiang. Uzbek and Uyghur are spoken in Uzbekistan, Tajikistan, Kyrgyzstan, Afghanistan and Xinjiang. Middle Iranian languages were once spoken throughout Central Asia, such as the once prominent Sogdian, Khwarezmian, Bactrian and Scythian, which are now extinct and belonged to the Eastern Iranian family. The Eastern Iranian Pashto language is still spoken in Afghanistan and northwestern Pakistan. Other minor Eastern Iranian languages such as Shughni, Munji, Ishkashimi, Sarikoli, Wakhi, Yaghnobi and Ossetic are also spoken at various places in Central Asia. Varieties of Persian are also spoken as a major language in the region, locally known as Dari (in Afghanistan), Tajik (in Tajikistan and Uzbekistan), and Bukhori (by the Bukharan Jews of Central Asia). Tocharian, another Indo-European language group, which was once predominant in oases on the northern edge of the Tarim Basin of Xinjiang, is now extinct. Other language groups include the Tibetic languages, spoken by around six million people across the Tibetan Plateau and into Qinghai, Sichuan (Szechwan), Ladakh and Baltistan, and the Nuristani languages of northeastern Afghanistan. Korean is spoken by the Koryo-saram minority, mainly in Kazakhstan and Uzbekistan. Religions. Islam is the religion most common in the Central Asian Republics, Afghanistan, Xinjiang, and the peripheral western regions, such as Bashkortostan. Most Central Asian Muslims are Sunni, although there are sizable Shia minorities in Afghanistan and Tajikistan. Buddhism and Zoroastrianism were the major faiths in Central Asia before the arrival of Islam. Zoroastrian influence is still felt today in such celebrations as Nowruz, held in all five of the Central Asian states. The transmission of Buddhism along the Silk Road eventually brought the religion to China. Amongst the Turkic peoples, Tengrism was the leading religion before Islam. Tibetan Buddhism is most common in Tibet, Mongolia, Ladakh, and the southern Russian regions of Siberia. The form of Christianity most practiced in the region in previous centuries was Nestorianism, but now the largest denomination is the Russian Orthodox Church, with many members in Kazakhstan, where about 25% of the population of 19 million identify as Christian, 17% in Uzbekistan and 5% in Kyrgyzstan. 
Pew Research Center estimates indicate that in 2010, around 6 million Christians lived in Central Asian countries. The same Pew Forum study finds that Kazakhstan (4.1 million) has the largest Christian population in the region, followed by Uzbekistan (710,000), Kyrgyzstan (660,000), Turkmenistan (320,000) and Tajikistan (100,000). The Bukharan Jews were once a sizable community in Uzbekistan and Tajikistan, but nearly all have emigrated since the dissolution of the Soviet Union. In Siberia, shamanistic practices persist, including forms of divination such as Kumalak. Contact and migration with Han people from China has brought Confucianism, Daoism, Mahayana Buddhism, and other Chinese folk beliefs into the region. Central Asia has thus been a crossroads for many beliefs and elements integral to the religious traditions of Judaism, Christianity, Islam and Buddhism. Geostrategy. Central Asia has long been a strategic location merely because of its proximity to several great powers on the Eurasian landmass. The region itself never held a dominant stationary population, nor was it able to make full use of its natural resources. Thus, it has rarely throughout history become the seat of power for an empire or influential state. Central Asia has been divided, redivided, conquered out of existence, and fragmented time and time again. Central Asia has served more as the battleground for outside powers than as a power in its own right. Central Asia had both the advantage and disadvantage of a central location between four historical seats of power. From its central location, it has access to trade routes to and from all the regional powers. On the other hand, it has been continuously vulnerable to attack from all sides throughout its history, resulting in political fragmentation or an outright power vacuum as it was successively dominated. In the post–Cold War era, Central Asia is an ethnic cauldron, prone to instability and conflicts, without a sense of national identity, but rather a mess of historical cultural influences, tribal and clan loyalties, and religious fervor. It is no longer just Russia that projects influence into the area, but also Turkey, Iran, China, Pakistan, India and the United States. Russian historian Lev Gumilev wrote that the Xiongnu, the Mongols (Mongol Empire, Zunghar Khanate) and the Turkic peoples (First Turkic Khaganate, Uyghur Khaganate) played a role in stopping Chinese aggression to the north. The Turkic Khaganate, in particular, had a special policy of resisting Chinese assimilation. The region, along with Russia, is also part of "the great pivot" as per the Heartland Theory of Halford Mackinder, which says that the power which controls Central Asia, richly endowed with natural resources, shall ultimately be the "empire of the world". For example, the region is endowed with various mineral resources such as chromium, cobalt, zinc, copper, silver, lithium, lead, molybdenum and many others, making it a potential major global supplier of critical materials for clean energy technologies. War on Terror. In the context of the United States' War on Terror, Central Asia has once again become the center of geostrategic calculations. Pakistan's status has been upgraded by the U.S. government to Major non-NATO ally because of its central role in serving as a staging point for the invasion of Afghanistan, providing intelligence on Al-Qaeda operations in the region, and leading the hunt for Osama bin Laden. Afghanistan, which had served as a haven and source of support for Al-Qaeda under the protection of Mullah Omar and the Taliban, was the target of a U.S. invasion in 2001 and of ongoing reconstruction and drug-eradication efforts. U.S. military bases have also been established in Uzbekistan and Kyrgyzstan, causing both Russia and the People's Republic of China to voice their concern over a permanent U.S. military presence in the region. Western governments have accused Russia, China and the former Soviet republics of using the War on Terror to justify the suppression of separatist movements and of the ethnic and religious groups associated with them.
6745
46007279
https://en.wikipedia.org/wiki?curid=6745
Couscous
Couscous is a traditional North African dish of small steamed granules of rolled semolina that is often served with a stew spooned on top. Pearl millet, sorghum, bulgur, and other cereals are sometimes cooked in a similar way in other regions, and the resulting dishes are also sometimes called couscous. Couscous is a staple food throughout the Maghrebi cuisines of Algeria, Tunisia, Mauritania, Morocco, and Libya. It was integrated into French and European cuisine at the beginning of the twentieth century, through the French colonial empire and the Pieds-Noirs of Algeria. In 2020, couscous was added to UNESCO's Intangible Cultural Heritage list. Etymology. The word "couscous" (alternately "cuscus" or "kuskus") was first noted in early 17th century French, from Arabic kuskus, from kaskasa 'to pound', and is probably of Berber origin. The term "seksu" is attested in various Berber dialects such as Kabyle and Rifain, while Saharan Berber dialects such as Touareg and Ghadames have a slightly different form, "keskesu." This widespread geographical dispersion of the term strongly suggests a local Berber origin, as the Algerian linguist Salem Chaker argues. The Berber root *KS means "well formed, well rolled, rounded." Numerous names and pronunciations for couscous exist around the world. History. It is unclear when couscous originated. Food historian Lucie Bolens believes couscous originated millennia ago, during the reign of Masinissa in the ancient kingdom of Numidia in present-day Algeria. Traces of cooking vessels akin to couscoussiers have been found in graves from the 3rd century BC, from the time of the Berber kings of Numidia, in the city of Tiaret, Algeria. Couscoussiers dating back to the 12th century were found in the ruins of Igiliz, located in the Sous valley of Morocco. According to food writer Charles Perry, couscous originated among the Berbers of Algeria and Morocco between the end of the 11th-century Zirid dynasty, in modern-day Algeria, and the rise of the 13th-century Almohad Caliphate. The historian Hady Idris noted that couscous is attested during the Hafsid dynasty, but not the Zirid dynasty. In the 12th century, Maghrebi cooks were preparing dishes of non-mushy grains by stirring flour with water to create light, round balls of couscous dough that could be steamed. The historian Maxime Rodinson found three recipes for couscous from the 13th century Arabic cookbook "Kitab al-Wusla ila al-Habib", written by an Ayyubid author, and the anonymous Arabic cooking book "Kitab al tabikh" and Ibn Razin al-Tujibi's "Fadalat al-khiwan" also contain recipes. Couscous is believed to have been spread among the inhabitants of the Iberian Peninsula by the Berber dynasties of the 13th century, though it is no longer found in traditional Spanish or Portuguese cuisine. In modern-day Trapani, Sicily, the dish is still made to the medieval recipe of Andalusian author Ibn Razin al-Tujibi. Ligurian families that moved from Tabarka to Sardinia brought the dish with them to Carloforte in the 18th century. Known in France since the 16th century, it was brought into French cuisine at the beginning of the 20th century via the French colonial empire and the Pieds-Noirs. Preparation. Couscous is traditionally made from semolina, the hardest part of the grain of durum wheat (the hardest of all forms of wheat), which resists the grinding of the millstone.
The semolina is sprinkled with water and rolled with the hands to form small pellets, sprinkled with dry flour to keep them separate, and then sieved. Any pellets that are too small to be finished granules of couscous fall through the sieve and are again sprinkled with dry semolina and rolled into pellets. This labor-intensive process continues until all the semolina has been formed into tiny couscous granules. In the traditional method of preparing couscous, groups of people come together to make large batches over several days, which are then dried in the sun and used for several months. Handmade couscous may need to be rehydrated as it is prepared; this is achieved by a process of moistening and steaming over stew until the couscous reaches the desired light and fluffy consistency. In some regions, couscous is made from farina or coarsely ground barley or pearl millet. In modern times, couscous production is largely mechanized, and the product is sold worldwide. This couscous can be sautéed before it is cooked in water or another liquid. Properly cooked couscous is light and fluffy, not gummy or gritty. Traditionally, North Africans use a food steamer (called a "taseksut" in the Berber language, a "kiskas" in Arabic or a "couscoussier" in French). The base is a tall metal pot shaped like an oil jar, where the meat and vegetables are cooked as a stew. On top of the base, a steamer sits where the couscous is cooked, absorbing the flavours from the stew. The steamer's lid has holes around its edge so steam can escape. It is also possible to use a pot with a steamer insert. If the holes are too big, the steamer can be lined with damp cheesecloth. The couscous that is sold in most Western grocery stores is usually pre-steamed and dried. It is typically prepared by adding 1.5 measures of boiling water or stock to each measure of couscous and then leaving it covered tightly for about five minutes. Pre-steamed couscous takes less time to prepare than regular couscous, most dried pasta, or dried grains (such as rice). Packaged sets of quick-preparation couscous and canned vegetables, and generally meat, are routinely sold in European grocery stores and supermarkets. Couscous is widely consumed in France, where it was introduced by Maghreb immigrants and voted the third most popular dish in a 2011 survey. Recognition. In December 2020, Algeria, Mauritania, Morocco, and Tunisia obtained official recognition for the knowledge, know-how, and practices pertaining to the production and consumption of couscous on the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO. The joint submission by the four countries was hailed as an "example of international cooperation." Local variations. Couscous proper is about 2 mm in diameter, but there also exists a larger variety (3 mm or more) known as "berkoukes", as well as an ultra-fine version (around 1 mm). In Morocco, Algeria, Tunisia, and Libya, it is generally served with vegetables (carrots, potatoes, and turnips) cooked in a spicy or mild broth or stew, usually with some meat (generally, chicken, lamb, or mutton). Algeria. Algerian couscous is a traditional staple food in Algeria, and it plays an important role in Algerian culture and cuisine. It is commonly served with vegetables, meat, or fish. In Algeria, there are various types of couscous dishes. Egypt.
In Egypt, couscous is traditionally prepared and consumed as a sweet dish, differing notably from the savory couscous dishes commonly associated with other North African cuisines. It is typically served for breakfast, as a light evening meal, or as a dessert. The preparation involves steaming or soaking the couscous with melted butter and hot water, after which it is topped with a variety of sweet ingredients. Common toppings include sugar (white, brown, or powdered), cinnamon, grated coconut, raisins, and assorted nuts such as almonds, walnuts, or hazelnuts. In some variations, sweetened condensed milk may also be used. Tunisia. In Tunisia, couscous is usually spicy, made with harissa sauce, and served commonly with vegetables and meat, including lamb, fish, seafood, beef, and sometimes (in southern regions) camel. Fish couscous is a Tunisian specialty and can also be made with octopus, squid or other seafood in a hot, red, spicy sauce. Couscous can also be served as a dessert. It is then called Masfuf. Masfuf can also contain raisins, grapes, or pomegranate seeds. Libya. In Libya, couscous is mostly served with lamb (but sometimes camel meat or, rarely, beef) in Tripoli and the western parts of Libya, but not during official ceremonies or weddings. Another way to eat couscous is as a dessert; it is prepared with dates, sesame, and pure honey and is locally referred to as "maghrood". Malta. In Malta, small round pasta slightly larger than typical couscous is known as "kusksu". It is commonly used in a dish of the same name, which includes broad beans (known in Maltese as "ful") and "ġbejniet", a local type of cheese. Mauritania. In Mauritania, the couscous uses large wheat grains ("mabroum") and is darker than the yellow couscous of Morocco. It is cooked with lamb, beef, or camel meat together with vegetables, primarily onion, tomato, and carrots, then mixed with a sauce and served with ghee, locally known as "dhen". Similar foods. Couscous is made from crushed wheat flour rolled into its constituent granules or pearls, making it distinct from pasta, even pasta such as orzo and risoni of similar size, which is made from ground wheat and either molded or extruded. Couscous and pasta have similar nutritional value, although pasta is usually more refined. Several dishes worldwide are also made from granules like those of couscous, rolled from the flour of grains or other milled or grated starchy crops. Dishes with similar names. Israeli couscous is an extruded and toasted pasta and does not share main ingredients or method of production with couscous.
6746
1286404575
https://en.wikipedia.org/wiki?curid=6746
Constantius II
Constantius II (7 August 317 – 3 November 361) was Roman emperor from 337 to 361. His reign saw constant warfare on the borders against the Sasanian Empire and Germanic peoples, while internally the Roman Empire went through repeated civil wars, court intrigues, and usurpations. His religious policies inflamed domestic conflicts that would continue after his death. Constantius was a son of Constantine the Great, who elevated him to the imperial rank of "Caesar" on 8 November 324 and after whose death Constantius became "Augustus" together with his brothers, Constantine II and Constans on 9 September 337. He promptly oversaw the massacre of his father-in-law, an uncle, and several cousins, consolidating his hold on power. The brothers divided the empire among themselves, with Constantius receiving Greece, Thrace, the Asian provinces, and Egypt in the east. For the following decade a costly and inconclusive war against Persia took most of Constantius's time and attention. In the meantime, his brothers Constantine and Constans warred over the western provinces of the empire, leaving the former dead in 340 and the latter as sole ruler of the west. The two remaining brothers maintained an uneasy peace with each other until, in 350, Constans was overthrown and assassinated by the usurper Magnentius. Unwilling to accept Magnentius as co-ruler, Constantius waged a civil war against the usurper, defeating him at the battles of Mursa Major in 351 and Mons Seleucus in 353. Magnentius died by suicide after the latter battle, leaving Constantius as sole ruler of the empire. In 351, Constantius elevated his cousin Constantius Gallus to the subordinate rank of "Caesar" to rule in the east, but had him executed three years later after receiving scathing reports of his violent and corrupt nature. Shortly thereafter, in 355, Constantius promoted his last surviving cousin, Gallus's younger half-brother Julian, to the rank of "Caesar". As emperor, Constantius promoted Arianism, banned pagan sacrifices, and issued laws against Jews. His military campaigns against Germanic tribes were successful: he defeated the Alamanni in 354 and campaigned across the Danube against the Quadi and Sarmatians in 357. The war against the Sasanians, which had been in a lull since 350, erupted with renewed intensity in 359 and Constantius travelled to the east in 360 to restore stability after the loss of several border fortresses. However, Julian claimed the rank of "Augustus" in 360, leading to war between the two after Constantius's attempts to persuade Julian to back down failed. No battle was fought, as Constantius became ill and died of fever on 3 November 361 in Mopsuestia, allegedly naming Julian as his rightful successor before his death. Early life. Flavius Julius Constantius was born in 317 at Sirmium, Pannonia, now Serbia. He was the third son of Constantine the Great, and second by his second wife Fausta, the daughter of Maximian. Constantius was made "caesar" by his father on 8 November 324. In 336, religious unrest in Armenia and tense relations between Constantine and King Shapur II caused war to break out between Rome and Sassanid Persia. Though he made initial preparations for the war, Constantine fell ill and sent Constantius east to take command of the eastern frontier. Before Constantius arrived, the Persian general Narses, who was possibly the king's brother, overran Mesopotamia and captured Amida.
Constantius promptly attacked Narses, and after suffering minor setbacks defeated and killed Narses at the Battle of Narasara. Constantius captured Amida and initiated a major refortification of the city, enhancing the city's circuit walls and constructing large towers. He also built a new stronghold in the hinterland nearby, naming it "Antinopolis". Augustus in the east. In early 337, Constantius hurried to Constantinople after receiving news that his father was near death. After Constantine died, Constantius buried him with lavish ceremony in the Church of the Holy Apostles. Soon after his father's death, the army massacred his relatives descended from the marriage of his paternal grandfather Constantius Chlorus to Flavia Maximiana Theodora, though the details are unclear. Two of Constantius's uncles (Julius Constantius and Flavius Dalmatius) and seven of his cousins were killed, including Hannibalianus and Dalmatius, rulers of Pontus and Moesia respectively, leaving Constantius, his two brothers Constantine II and Constans, and three cousins Gallus, Julian and Nepotianus as the only surviving male relatives of Constantine the Great. While the “official version” was that Constantius's relatives were merely the victims of a mutinous army, Ammianus Marcellinus, Zosimus, Libanius, Athanasius and Julian all blamed Constantius for the event. Burgess considered the latter version to be “consistent with all the evidence”, pointing to multiple factors that he believed lined up with the massacre being a planned attack rather than a spontaneous mutiny - the lack of high-profile punishments as a response, the sparing of all women, the attempted damnatio memoriae on the deceased, and the exile of the survivors Gallus and Julian. Soon after, Constantius met his brothers in Pannonia at Sirmium to formalize the partition of the empire. Constantius received the eastern provinces, including Constantinople, Thrace, Asia Minor, Syria, Egypt, and Cyrenaica; Constantine received Britannia, Gaul, Hispania, and Mauretania; and Constans, initially under the supervision of Constantine II, received Italy, Africa, Illyricum, Pannonia, Macedonia, and Achaea. Constantius then hurried east to Antioch to resume the war with Persia. While Constantius was away from the eastern frontier in early 337, King Shapur II assembled a large army, which included war elephants, and launched an attack on Roman territory, laying waste to Mesopotamia and putting the city of Nisibis under siege. Despite initial success, Shapur lifted his siege after his army missed an opportunity to exploit a collapsed wall. When Constantius learned of Shapur's withdrawal from Roman territory, he prepared his army for a counter-attack. Constantius repeatedly defended the eastern border against invasions by the Sassanid Empire under Shapur. These conflicts were mainly limited to Sassanid sieges of the major fortresses of Roman Mesopotamia, including Nisibis (Nusaybin), Singara, and Amida (Diyarbakir). Although Shapur seems to have been victorious in most of these confrontations, the Sassanids were able to achieve little. However, the Romans won a decisive victory at the Battle of Narasara, killing Shapur's brother, Narses. Ultimately, Constantius was able to push back the invasion, and Shapur failed to make any significant gains. Meanwhile, Constantine II desired to retain control of Constans's realm, leading the brothers into open conflict. Constantine was killed in 340 near Aquileia during an ambush. 
As a result, Constans took control of his deceased brother's realms and became sole ruler of the Western two-thirds of the empire. This division lasted until January 350, when Constans was assassinated by forces loyal to the usurper Magnentius. War against Magnentius. Constantius was determined to march west to fight the usurper. However, feeling that the east still required some sort of imperial presence, he elevated his cousin Constantius Gallus to "caesar" of the eastern provinces. As an extra measure to ensure the loyalty of his cousin, he married the elder of his two sisters, Constantina, to him. Before facing Magnentius, Constantius first came to terms with Vetranio, a loyal general in Illyricum who had recently been acclaimed emperor by his soldiers. Vetranio immediately sent letters to Constantius pledging his loyalty, which Constantius may have accepted simply in order to stop Magnentius from gaining more support. These events may have been spurred by the action of Constantina, who had since traveled east to marry Gallus. Constantius subsequently sent Vetranio the imperial diadem and acknowledged the general's new position as "augustus". However, when Constantius arrived, Vetranio willingly resigned his position and accepted Constantius's offer of a comfortable retirement in Bithynia. In 351, Constantius clashed with Magnentius in Pannonia with a large army. The ensuing Battle of Mursa Major was one of the largest and bloodiest battles ever between two Roman armies. The result was a victory for Constantius, but a costly one. Magnentius survived the battle and, determined to fight on, withdrew into northern Italy. Rather than pursuing his opponent, however, Constantius turned his attention to securing the Danubian border, where he spent the early months of 352 campaigning against the Sarmatians along the middle Danube. After achieving his aims, Constantius advanced on Magnentius in Italy. This action led the cities of Italy to switch their allegiance to him and eject the usurper's garrisons. Again, Magnentius withdrew, this time to southern Gaul. In 353, Constantius and Magnentius met for the final time at the Battle of Mons Seleucus in southern Gaul, and again Constantius emerged the victor. Magnentius, realizing the futility of continuing his position, committed suicide on 10 August 353. Solo reign. Constantius spent much of the rest of 353 and early 354 on campaign against the Alamanni on the Danube frontier. The campaign was successful and raiding by the Alamanni ceased temporarily. In the meantime, Constantius had been receiving disturbing reports regarding the actions of his cousin Gallus. Possibly as a result of these reports, Constantius concluded a peace with the Alamanni and traveled to Mediolanum (Milan). In Mediolanum, Constantius first summoned Ursicinus, Gallus's "magister equitum", for reasons that remain unclear. Constantius then summoned Gallus and Constantina. Although Gallus and Constantina complied with the order at first, when Constantina died in Bithynia, Gallus began to hesitate. However, after some convincing by one of Constantius's agents, Gallus continued his journey west, passing through Constantinople and Thrace to Poetovio (Ptuj) in Pannonia. In Poetovio, Gallus was arrested by the soldiers of Constantius under the command of Barbatio. Gallus was then moved to Pola and interrogated. Gallus claimed that it was Constantina who was to blame for all the trouble while he was in charge of the eastern provinces. 
This angered Constantius so greatly that he immediately ordered Gallus's execution. He soon changed his mind, however, and rescinded the order. Unfortunately for Gallus, this second order was delayed by Eusebius, one of Constantius's eunuchs, and Gallus was executed. Religious issues. Paganism. Laws dating from the 350s prescribed the death penalty for those who performed or attended pagan sacrifices, and for the worshipping of idols. Pagan temples were shut down, and the Altar of Victory was removed from the Senate meeting house. There were also frequent episodes of ordinary Christians destroying, pillaging and desecrating many ancient pagan temples, tombs and monuments. Paganism was still popular among the population at the time. The emperor's policies were passively resisted by many governors and magistrates. In spite of this, Constantius never made any attempt to disband the various Roman priestly colleges or the Vestal Virgins. He never acted against the various pagan schools. At times, he actually made some effort to protect paganism. In fact, he even ordered the election of a priest for Africa. Also, he remained pontifex maximus and was deified by the Roman Senate after his death. His relative moderation toward paganism is reflected by the fact that it was over twenty years after his death, during the reign of Gratian, that any pagan senator protested his treatment of their religion. Christianity. Although often considered an Arian, Constantius ultimately preferred a third, compromise version that lay somewhere in between Arianism and the Nicene Creed, retrospectively called Semi-Arianism. During his reign he attempted to mold the Christian church to follow this compromise position, convening several Christian councils. "Unfortunately for his memory the theologians whose advice he took were ultimately discredited and the malcontents whom he pressed to conform emerged victorious," writes the historian A. H. M. Jones. "The great councils of 359–60 are therefore not reckoned ecumenical in the tradition of the church, and Constantius II is not remembered as a restorer of unity, but as a heretic who arbitrarily imposed his will on the church." According to the Greek historian Philostorgius (d. 439) in his "Ecclesiastical History", Constantius sent an Arian bishop known as Theophilus the Indian (also known as "Theophilus of Yemen") to Tharan Yuhanim, then the king of the South Arabian Himyarite Kingdom, to convert the people to Christianity. According to the report, Theophilus succeeded in establishing three churches, one of them in the capital Zafar. Judaism. Judaism faced some severe restrictions under Constantius, who seems to have followed an anti-Jewish policy in line with that of his father. This included edicts to limit the ownership of slaves by Jewish people and banning marriages between Jews and Christian women. Later edicts sought to discourage conversions from Christianity to Judaism by confiscating the apostate's property. However, Constantius's actions in this regard may not have had so much to do with Jewish religion as with Jewish business; apparently, privately owned Jewish businesses were often in competition with state-owned businesses. As a result, Constantius may have sought to provide an advantage to state-owned businesses by limiting the skilled workers and slaves available to Jewish businesses. Further crises. On 11 August 355, the "magister militum" Claudius Silvanus revolted in Gaul. Silvanus had surrendered to Constantius after the Battle of Mursa Major.
Constantius had made him "magister militum" in 353 with the purpose of blocking the German threats, a feat that Silvanus achieved by bribing the German tribes with the money he had collected. A plot organized by members of Constantius's court led the emperor to recall Silvanus. After Silvanus revolted, he received a letter from Constantius recalling him to Milan, but which made no reference to the revolt. Ursicinus, who was meant to replace Silvanus, bribed some troops, and Silvanus was killed. Constantius realised that too many threats still faced the Empire, however, and he could not possibly handle all of them by himself. So on 6 November 355, he elevated his last remaining male relative, Julian, to the rank of "caesar". A few days later, Julian was married to Helena, the last surviving sister of Constantius. Constantius soon sent Julian off to Gaul. Constantius spent the next few years overseeing affairs in the western part of the empire primarily from his base at Mediolanum. In April–May 357 he visited Rome for the only time in his life. The same year, he forced Sarmatian and Quadi invaders out of Pannonia and Moesia Inferior, then led a successful counter-attack across the Danube. In the winter of 357–58, Constantius received ambassadors from Shapur II who demanded that Rome restore the lands surrendered by Narseh. Despite rejecting these terms, Constantius tried to avert war with the Sassanid Empire by sending two embassies to Shapur II. Shapur II nevertheless launched another invasion of Roman Mesopotamia. In 360, when news reached Constantius that Shapur II had destroyed Singara (Sinjar), and taken Kiphas (Hasankeyf), Amida (Diyarbakır), and Ad Tigris (Cizre), he decided to travel east to face the re-emergent threat. Usurpation of Julian and crises in the east. In the meantime, Julian had won some victories against the Alamanni, who had once again invaded Roman Gaul. However, when Constantius requested reinforcements from Julian's army for the eastern campaign, the Gallic legions revolted and proclaimed Julian "augustus". On account of the immediate Sassanid threat, Constantius was unable to directly respond to his cousin's usurpation, other than by sending missives in which he tried to convince Julian to resign the title of "augustus" and be satisfied with that of "caesar". By 361, Constantius saw no alternative but to face the usurper with force, and yet the threat of the Sassanids remained. Constantius had already spent part of early 361 unsuccessfully attempting to re-take the fortress of Ad Tigris. After a time he had withdrawn to Antioch to regroup and prepare for a confrontation with Shapur II. The campaigns of the previous year had inflicted heavy losses on the Sassanids, however, and they did not attempt another round of campaigns that year. This temporary respite in hostilities allowed Constantius to turn his full attention to facing Julian. Death. Constantius immediately gathered his forces and set off west. However, by the time he reached Mopsuestia in Cilicia, it was clear that he was fatally ill and would not survive to face Julian. The sources claim that realising his death was near, Constantius had himself baptised by Euzoius, the Semi-Arian bishop of Antioch, and then declared that Julian was his rightful successor. Constantius II died of fever on 3 November 361. Like Constantine the Great, he was buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the "De Ceremoniis". 
Marriages and children. Constantius II was married three times: First, to a daughter of his half-uncle Julius Constantius, whose name is unknown. She was a full-sister of Gallus and a half-sister of Julian. She died c. 352/3. Second, to Eusebia, a woman of Macedonian origin, originally from the city of Thessalonica, whom Constantius married before his defeat of Magnentius in 353. She died before 361. Third and lastly, in 361, to Faustina, who gave birth to Constantius's only child, a posthumous daughter named Constantia, who later married Emperor Gratian. Family tree. (Family tree chart: section 1 shows Constantine's parents and half-siblings; section 2 shows Constantine's children. Emperors are marked with their dates as Augusti, and some names appear in both sections.) Reputation. According to DiMaio and Frakes, "...Constantius is hard for the modern historian to fully understand both due to his own actions and due to the interests of the authors of primary sources for his reign." A. H. M. Jones writes that he "appears in the pages of Ammianus as a conscientious emperor but a vain and stupid man, an easy prey to flatterers. He was timid and suspicious, and interested persons could easily play on his fears for their own advantage." However, Kent and M. and A. Hirmer suggest that the emperor "has suffered at the hands of unsympathetic authors, ecclesiastical and civil alike. To orthodox churchmen he was a bigoted supporter of the Arian heresy, to Julian the Apostate and the many who have subsequently taken his part he was a murderer, a tyrant and inept as a ruler". They go on to add, "Most contemporaries seem in fact to have held him in high esteem, and he certainly inspired loyalty in a way his brother could not". Eutropius wrote of him: "He was a man of a remarkably tranquil disposition, good-natured, trusting too much to his friends and courtiers, and at last too much in the power of his wives. He conducted himself with great moderation in the commencement of his reign; he enriched his friends, and suffered none, whose active services he had experienced, to go unrewarded. He was however somewhat inclined to severity, whenever any suspicion of an attempt on the government was excited in him; otherwise he was gentle. His fortune is more to be praised in civil than in foreign wars."
6747
18872885
https://en.wikipedia.org/wiki?curid=6747
Constans
Flavius Julius Constans (c. 323 – 350), also called Constans I, was Roman emperor from 337 to 350. He held the imperial rank of "caesar" from 333, and was the youngest son of Constantine the Great. After his father's death, he was made "augustus" alongside his brothers in September 337. Constans was given the administration of the praetorian prefectures of Italy, Illyricum, and Africa. He defeated the Sarmatians in a campaign shortly afterwards. Quarrels over the sharing of power led to a civil war with his eldest brother and co-emperor Constantine II, who invaded Italy in 340 and was killed in battle by Constans's forces near Aquileia. Constans gained from him the praetorian prefecture of Gaul. Thereafter there were tensions with his remaining brother and co-"augustus" Constantius II, including over the exiled bishop Athanasius of Alexandria, who in turn eulogized Constans as "the most pious Augustus... of blessed and everlasting memory." In the following years he campaigned against the Franks, and in 343 he visited Roman Britain, the last legitimate emperor to do so until Manuel II in 1400, more than a thousand years later. In January 350, Magnentius, the commander of the Jovians and Herculians, a corps in the Roman army, was acclaimed "augustus" at Augustodunum (Autun) with the support of Marcellinus, the "comes rei privatae". Magnentius overthrew and killed Constans. Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. Early life. Sources variously report Constans' age at the time of his death as 27 or 30, meaning he was born in either 320 or 323. Timothy Barnes, observing numismatic evidence, considered the younger age to be more likely. He was the third and youngest son of Constantine I and Fausta. According to the works of both Ausonius and Libanius, he was educated at Constantinople under the tutelage of the poet Aemilius Magnus Arborius, who instructed him in Latin. On 25 December 333, Constans was elevated to the imperial rank of "caesar" at Constantinople by his father. Prior to 337, Constans became engaged to Olympias, the daughter of the praetorian prefect Ablabius, although the two never actually married. Reign. After Constantine's death, Constans and his two brothers, Constantine II and Constantius II were proclaimed "augusti" and divided the Roman empire among themselves on 9 September 337. Constans was left with Italy, Africa and Illyricum. In 338, he campaigned against the Sarmatians. Meanwhile, Constans came into conflict with his eldest brother Constantine II over the latter's presumed authority over Constans' territory. After attempting to issue legislation to Africa in 339, which was part of Constans' realm, Constantine led his army into an invasion of Italy only a year later. However, he was ambushed and killed by Constans' troops, and Constans then took control of his brother's territories. Constans began his reign in an energetic fashion. From 341 to 342, he led a campaign against the Franks where, after an initial setback, the military operation concluded with a victory and a favorable peace treaty. Eutropius wrote that he "had performed many gallant actions in the field, and had made himself feared by the army through the whole course of his life, though without exercising any extraordinary severity," while Ammianus Marcellinus remarked that Julian was the only person the Alamanni feared after the death of Constans.
In the early months of 343, he visited Britain, an event celebrated enough for Libanius to dedicate several sections of his panegyric to explaining it. Although the reasons for the visit remain unclear, the ancient writers were primarily interested in Constans' precarious journey to the province, rather than his actions within it. One theory considers it to have involved the northern frontier, based on Ammianus' remark that he had discussed the Areani in his now-lost coverage of Constans' reign. Additionally, after recording attacks "near the frontiers" in 360, the historian wrote that the Alamanni were too much of a threat for Julian to confront the problem, in contrast to what Constans was able to do. Constans was accused of employing corrupt ministers during his reign, due to his purported personal greed. One example included the "magister officiorum" (master of the offices) Flavius Eugenius, who remained in his position throughout most of the 340s. Despite Eugenius being alleged to have misused his power to seize property, the emperor continued to support him, his trust going as far as to honor him with a statue in the Forum of Trajan in Rome. Religion. Constans issued an edict banning superstition and pagan sacrifices in 341, his justification being that he was following the precedent set by his father. Only a short while later though, he tried to moderate his stance by legislating against the destruction of temple buildings. Constans' support of Nicene orthodoxy and the bishop Athanasius of Alexandria brought him into conflict with his brother Constantius. Although the two emperors called the Council of Serdica in 343 to settle the conflict, it was a complete failure, and by 345 Constans was outright threatening civil war against his brother. Eventually, Constantius agreed to allow Athanasius to return to his position, as the bishop's replacement had recently died. Constans also used the military to suppress Donatism in Africa, where the church was split between Donatists and Catholics. Alleged homosexuality. Unlike Constantius, Constans was targeted with gossip over his personal life. Numerous sources suspected him of homosexuality, presumably based on the fact that he never married. Aurelius Victor charged Constans with "rabid" pederasty towards young barbarian hostages, though Hunt remarked that "the allegation that he kept a coterie of captive barbarians to gratify his homosexual tastes sounds more like hostile folklore." Constans' legislation against homosexuality has been cited to dispute the rumor. Death. On 18 January 350, the general Magnentius declared himself emperor at Augustodunum (Autun) with the support of a number of court officials such as Marcellinus, Constans' comes rerum privatarum, as well as Fabius Titianus, who had previously served as the praetorian prefect of Gaul. At the time, Constans was distracted by a hunting trip. As he was trying to reach Hispania, supporters of Magnentius cornered him in a fortification in Helena (Elne) in the eastern Pyrenees of southwestern Gaul, where he was killed after seeking sanctuary in a temple. An alleged prophecy at his birth had said Constans would die "in the arms of his grandmother". His place of death happens to have been named after Helena, mother of Constantine and his own grandmother, thus realizing the prophecy. Constans' name would later be erased from inscriptions in places that recognized Magnentius as emperor. 
Regarding possible motives for Constans' overthrow, ancient sources assert that he was widely unpopular, and attribute his downfall to his own failings. Along with the accusation of corruption, he is also accused of neglecting portions of the empire and treating his soldiers with contempt. Ammianus lamented the emperor's failure to listen to wise counsel, referencing one man he believed could have saved Constans from his own faults. However, some modern scholars have questioned this portrayal. According to historian Jill Harries, "The detail that Constans was in the habit of making journeys with only a small escort may account for his vulnerability in 350." Based on several factors – the small number of people behind the plot, the fact that the setting for Magnentius' coup was not a military centre, Vetranio's proclamation as emperor in opposition to Magnentius, and Julian's report that the usurper had to murder several of Constans' generals to take control of the Gallic army – she concluded that Magnentius' revolt was "the result of a private grudge on the part of an apprehensive official and not the outcome of widespread discontent among the military or the wider population." This view is supported by Peter Crawford, who considered the explanation from the ancient sources to be a misconception caused by the rapid success of the coup. Harries does, however, acknowledge that the Gallic army accepted Magnentius seemingly without difficulty, and that, according to Zosimus, Constantius' official Philippus emphasized Constantine, rather than Constans, when addressing Magnentius' troops. In speculating on the basis for Constans' overthrow, she suggested that one reason may have been financial difficulties in Gaul by the end of his reign, which could have been related to the finance officer Marcellinus' support of him. After Magnentius took power, he levied taxes, sold imperial estates in Gaul and debased the coinage. Nicholas Baker-Brian also observed how Magnentius sent his brother Decentius to defend the region after Constans had neglected it, writing that, "it is apparent that among the reasons for Magnentius' rebellion was a desire to remedy Constans' governmental failings in Gaul." Family tree. (Family tree chart: section 1 shows Constantine's parents and half-siblings; section 2 shows Constantine's children. Emperors are marked with their dates as Augusti, and some names appear in both sections.)
6749
332841
https://en.wikipedia.org/wiki?curid=6749
Cheerleading
Cheerleading is an activity in which the participants (called cheerleaders) cheer for their team as a form of encouragement. It can range from chanting slogans to intense physical activity. It can be performed to motivate sports teams, to entertain the audience, or for competition. Cheerleading routines typically range anywhere from one to three minutes, and contain components of tumbling, dance, jumps, cheers, and stunting. Cheerleading originated in the United States, where it has become a tradition. It is less prevalent in the rest of the world, except via its association with American sports or organized cheerleading contests. Modern cheerleading is very closely associated with American football and basketball. Sports such as association football (soccer), ice hockey, volleyball, baseball, and wrestling will sometimes sponsor cheerleading squads. The ICC Twenty20 Cricket World Cup in South Africa in 2007 was the first international cricket event to have cheerleaders. Some Brazilian association football (soccer) teams that play in the Brazilian Série A, such as Bahia, Fortaleza and Botafogo, have cheerleading squads. In baseball, the Florida Marlins were the first Major League Baseball team to have a cheerleading team. Cheerleading originated as an all-male activity in the United States, and is popular predominantly in America, with an estimated 3.85 million participants in 2017. The global presentation of cheerleading was led by the 1997 broadcast of ESPN's International cheerleading competition, and the worldwide release of the 2000 film "Bring It On". The International Cheer Union (ICU) now claims 116 member nations with an estimated 7.5 million participants worldwide. By the end of the 2000s, the sport had gained traction outside of the United States in countries like Australia, Canada, Mexico, China, Colombia, Finland, France, Germany, Japan, the Netherlands, New Zealand, and the United Kingdom. However, the sport does not have the international popularity of other American sports, such as baseball or basketball, despite efforts being made to popularize the sport at an international level. In 2016, the IOC (International Olympic Committee) granted recognition to the ICU (International Cheer Union) as an international sports federation; in practice this means that the discipline is considered a sport by the IOC, and in the future, depending on negotiations and international popularization, it could become part of the Olympic Games. Scientific studies of cheerleading show that it carries the highest rate of catastrophic injuries to female athletes in sports, with most injuries associated with stunting, also known as pyramids. One 2011 study of American female athletes showed that cheerleading resulted in 65% of all catastrophic injuries in female sports. History. Before organized cheerleading. In the 1860s, students from Great Britain began to cheer and chant in unison for their favorite athletes at sporting events. Soon, that gesture of support crossed overseas to America. On November 6, 1869, the United States witnessed its first intercollegiate football game. It took place between Princeton University and Rutgers University, and marked the day the original "Sis Boom Rah!" cheer was shouted out by student fans. Beginning of organized cheerleading. Organized cheerleading began as an all-male activity. As early as 1877, Princeton University had a "Princeton Cheer", documented in the February 22, 1877, March 12, 1880, and November 4, 1881, issues of "The Daily Princetonian".
This cheer was yelled from the stands by students attending games, as well as by the athletes themselves. The cheer, "Hurrah! Hurrah! Hurrah! Tiger! S-s-s-t! Boom! A-h-h-h!" remains in use with slight modifications today, where it is now referred to as the "Locomotive". Princeton class of 1882 graduate Thomas Peebles moved to Minnesota in 1884. He transplanted the idea of organized crowds cheering at football games to the University of Minnesota. The term "Cheer Leader" had been used as early as 1897, with Princeton's football officials having named three students as "Cheer Leaders:" Thomas, Easton, and Guerin from Princeton's classes of 1897, 1898, and 1899, respectively, on October 26, 1897. These students would cheer for the team also at football practices, and special cheering sections were designated in the stands for the games themselves for both the home and visiting teams. It was not until 1898 that University of Minnesota student Johnny Campbell directed a crowd in cheering "Rah, Rah, Rah! Ski-u-mah, Hoo-Rah! Hoo-Rah! Varsity! Varsity! Varsity, Minn-e-So-Tah!", making Campbell the very first cheerleader. November 2, 1898, is the official birth date of organized cheerleading. Soon after, the University of Minnesota organized a "yell leader" squad of six male students, who still use Campbell's original cheer today. Early 20th century cheerleading and female participation. In 1903, the first cheerleading fraternity, Gamma Sigma, was founded. In 1923, at the University of Minnesota, women were permitted to participate in cheerleading. However, it took time for other schools to follow. In the late 1920s, many school manuals and newspapers that were published still referred to cheerleaders as "chap", "fellow", and "man". Women cheerleaders were overlooked until the 1940s when collegiate men were drafted for World War II, creating the opportunity for more women to make their way onto sporting event sidelines. As noted by Kieran Scott in "Ultimate Cheerleading": "Girls really took over for the first time." In 1949, Lawrence Herkimer, a former cheerleader at Southern Methodist University and inventor of the herkie jump, founded his first cheerleading camp in Huntsville, Texas. 52 girls were in attendance. The clinic was so popular that Herkimer was asked to hold a second, where 350 young women were in attendance. Herkimer also patented the pom-pom. Growth in popularity (1950–1979). In 1951, Herkimer created the National Cheerleading Association to help grow the activity and provide cheerleading education to schools around the country. During the 1950s, female participation in cheerleading continued to grow. An overview written on behalf of cheerleading in 1955 explained that in larger schools, "occasionally boys as well as girls are included", and in smaller schools, "boys can usually find their place in the athletic program, and cheerleading is likely to remain solely a feminine occupation". Cheerleading could be found at almost every school level across the country; even pee wee and youth leagues began to appear. In the 1950s, professional cheerleading also began. The first recorded cheer squad in National Football League (NFL) history was for the Baltimore Colts. Professional cheerleaders put a new perspective on American cheerleading. Women were exclusively chosen for dancing ability as well as to conform to the male gaze, as heterosexual men were the targeted marketing group. 
By the 1960s, college cheerleaders employed by the NCA were hosting workshops across the nation, teaching fundamental cheer skills to tens of thousands of high-school-age girls. Herkimer also contributed many notable firsts to cheerleading: the founding of a cheerleading uniform supply company, inventing the herkie jump (where one leg is bent towards the ground as if kneeling and the other is out to the side as high as it will stretch in toe-touch position), and creating the "Spirit Stick". In 1965, Fred Gastoff invented the vinyl pom-pom, which was introduced into competitions by the International Cheerleading Foundation (ICF, now the World Cheerleading Association, or WCA). Organized cheerleading competitions began to appear, with the first ranking of the "Top Ten College Cheerleading Squads" and the "Cheerleader All America" awards given out by the ICF in 1967. The Dallas Cowboys Cheerleaders soon gained the spotlight with their revealing outfits and sophisticated dance moves, debuting in the 1972–1973 season, but were first widely seen in Super Bowl X (1976). These pro squads of the 1970s established cheerleaders as "American icons of wholesome sex appeal." In 1975, Randy Neil estimated that over 500,000 students actively participated in American cheerleading from elementary school to the collegiate level. Neil also approximated that ninety-five percent of cheerleaders within America were female. In 1978, America was introduced to competitive cheerleading by the first broadcast of the Collegiate Cheerleading Championships on CBS. 1980s to present. The 1980s saw the beginning of modern cheerleading, adding difficult stunt sequences and gymnastics into routines. All-star teams, or those not affiliated with a school, appeared, and eventually led to the creation of the U.S. All Star Federation (USASF). ESPN first broadcast the National High School Cheerleading Competition nationwide in 1983. By 1981, a total of seventeen National Football League teams had their own cheerleaders. The only teams without NFL cheerleaders at this time were New Orleans, New York, Detroit, Cleveland, Denver, Minnesota, Pittsburgh, San Francisco, and San Diego. Professional cheerleading eventually spread to soccer and basketball teams as well. Cheerleading organizations such as the American Association of Cheerleading Coaches and Advisors (AACCA), founded in 1987, started applying universal safety standards to decrease the number of injuries and prevent dangerous stunts, pyramids, and tumbling passes from being included in cheerleading routines. In 2003, the National Council for Spirit Safety and Education (NCSSE) was formed to offer safety training for youth, school, all-star, and college coaches. The NCAA now requires college cheer coaches to successfully complete a nationally recognized safety-training program. Even with its athletic and competitive development, cheerleading at the school level has retained its ties to its spirit-leading traditions. Cheerleaders are quite often seen as ambassadors for their schools, and leaders among the student body. At the college level, cheerleaders are often invited to help at university fundraisers and events. Debuting in 2003, the "Marlin Mermaids" gained national exposure, and have influenced other MLB teams to develop their own cheer/dance squads. As of 2005, overall statistics showed that around 97% of all modern cheerleading participants were female, although at the collegiate level cheerleading is co-ed, with about 50% of participants being male.
Modern male cheerleaders' stunts focus less on flexibility and more on tumbling, flips, pikes, and handstands. These depend on strong legs and core strength. In 2019, Napoleon Jinnies and Quinton Peron became the first male cheerleaders in the history of the NFL to perform at the Super Bowl. Safety regulation changes. Kristi Yamaoka, a cheerleader for Southern Illinois University, suffered a fractured vertebra when she hit her head after falling from a human pyramid. She also suffered a concussion and a bruised lung. The fall occurred when Yamaoka lost her balance during a basketball game between Southern Illinois University and Bradley University at the Savvis Center in St. Louis on March 5, 2006. The fall gained "national attention" because Yamaoka continued to perform from a stretcher as she was moved away from the game. The accident caused the Missouri Valley Conference to ban its member schools from allowing cheerleaders to be "launched or tossed and from taking part in formations higher than two levels" for one week during a women's basketball conference tournament, and also resulted in a recommendation by the NCAA that conferences and tournaments not allow pyramids two and a half levels high or higher, or the stunt known as the basket toss, during the rest of the men's and women's basketball season. On July 11, 2006, the bans were made permanent by the AACCA rules committee: The committee unanimously voted for sweeping revisions to cheerleading safety rules, the most significant of which restricts specific upper-level skills during basketball games. Basket tosses, high pyramids, one-arm stunts, stunts that involve twisting or flipping, and twisting tumbling skills may be performed only during halftime and post-game on a matted surface and are prohibited during game play or time-outs. Types of teams in the United States today. School-sponsored. Most American high schools and colleges, as well as a large number of middle schools, have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but its main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading can either be a year-round activity, with tryouts held during the spring semester and camps over the summer, or follow a more seasonal, scholastic program, with squads active only during a school's academic year for ceremonial occasions or for sideline support. In addition to the preexisting acrobatics-centered competitive format, the newer Game Day format has gradually been introduced since the early 2020s as a second pillar of competitive cheerleading. An increasing number of schools are forming dedicated Game Day squads, which are typically distinct from the existing competitive programs both thematically (spirit-themed uniforms, props, and choreography) and operationally. Middle school. Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified version of the high school rules, sometimes with additional restrictions. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter.
However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area. High school. In high school, there are usually two squads per school: varsity and junior varsity. High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle: tryouts in the spring, practice throughout the year, cheering on teams in the fall and winter, and participating in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five to six days a week. During competition season, it often becomes seven days a week, sometimes with practice twice a day. The school spirit aspect of cheerleading involves cheering, supporting, and "hyping up" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies, and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700. Different cheerleading organizations put on competitions; some of the major ones are state and regional competitions. Many high schools host cheerleading competitions, bringing in IHSA judges. The regional competitions are qualifiers for national competitions, such as the UCA (Universal Cheerleaders Association) nationals held in Orlando, Florida, every year. Many teams have a professional choreographer who choreographs their routine to ensure they are not breaking rules or regulations and to give the squad creative elements. College. Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized as athletics by the NCAA, NAIA, or NJCAA; therefore, few to no scholarships are offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events. College squads perform more difficult stunts, which include multi-level pyramids, as well as flipping and twisting basket tosses. In addition to cheering on the other sports at their university, many college teams compete against other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a 2-minute and 30-second routine that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools. Youth leagues and athletic associations.
Organizations that sponsor youth cheer teams usually sponsor either youth league football or basketball teams as well. This allows the two, under the same sponsor, to be intermingled. Both teams share the same mascot name, and the cheerleaders perform at their football or basketball games. Examples of such sponsors include Pop Warner, American Youth Football, and the YMCA. The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify. All-star or club cheerleading. "All-star" or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club to which they typically pay dues or tuition, similar to a gymnastics gym. During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call itself all-star was the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at their 1987 competitions. As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better used for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules. The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all-star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, the USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule-making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become its rule-making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide. All-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions, which are grouped based upon age, size of the team, gender of participants, and ability level.
The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but actually performs for only a few minutes during its routine. The number of competitions a team participates in varies from team to team, but generally, most teams tend to participate in six to ten competitions a year. These competitions include locals or regionals, which normally take place in school gymnasiums or local venues; nationals, hosted in large venues all around the U.S.; and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to its own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization. All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company for many subsidiaries, including the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each separate company or subsidiary typically hosts its own local and national-level competitions. This means that many gyms within the same area could be state and national champions for the same year and never have competed against each other. Currently, there is no system in place that awards only one state or national title. Judges at a competition watch closely for illegal skills from the group or any individual member. Here, an illegal skill is something that is not allowed in that division due to difficulty or safety restrictions. They also look out for deductions, or things that go wrong, such as a dropped stunt or a tumbler who does not stick a landing. More generally, judges look at the difficulty and execution of jumps, stunts and tumbling, synchronization, creativity, the sharpness of the motions, showmanship, and overall routine execution. If a level 6 or 7 team places high enough at selected USASF/IASF-sanctioned national competitions, it can earn a place at the Cheerleading Worlds and compete against teams from all over the world, as well as receive money for placing. For elite-level cheerleaders, the Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is considered a great honor. Professional. Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, hockey, association football, rugby football, lacrosse, and cricket. There are only a small handful of professional cheerleading leagues around the world; some professional leagues include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams. In addition to cheering at games and competing, professional cheerleaders often take part in philanthropy and charity work, modeling, motivational speaking, television performances, and advertising. Injuries and accidents. Cheerleading carries the highest rate of catastrophic injuries to female athletes in high school and collegiate sports.
Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982–83 academic year through the 2018–19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed. The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than for all sports at this level aside from football. Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading. The main source of injuries is stunting, also known as pyramids. These stunts are performed at games and pep rallies, as well as competitions. Sometimes competition routines are focused solely on the use of difficult and risky stunts. These stunts usually include a flyer (the person on top), along with one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading-related injury is a concussion, and 96% of those concussions are stunt related. Other injuries include sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones. In some cases, however, injuries can be as serious as whiplash, broken necks, and broken vertebrae, and can even result in death. The journal "Pediatrics" reported that the number of cheerleaders suffering from broken bones, concussions, and sprains increased by over 100 percent between 1990 and 2002, and that in 2001 there were 25,000 hospital visits reported for cheerleading injuries to the shoulder, ankle, head, and neck. Meanwhile, in the US, cheerleading accounted for 65.1% of all major physical injuries to high school females, and 66.7% of major injuries to college students due to physical activity, from 1982 to 2007, with 22,900 minors being admitted to hospital with cheerleading-related injuries in 2002. The risks of cheerleading were highlighted by the death of Lauren Chang. Chang died on April 14, 2008, after competing in a competition in which her teammate had kicked her so hard in the chest that her lungs collapsed. Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13. Associations, federations, and organizations. International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by SportAccord as the world governing body of cheerleading and the authority on all matters relating to it. With participation from its 105 member national federations, reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world.
Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member, and SportAccord's 93rd international sports federation to join the international sports family. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority on all matters related to it. The ICU has introduced a junior age group (12–16) to compete at the Cheerleading Worlds, because cheerleading now has provisional status to become an Olympic sport. For cheerleading to one day be in the Olympics, there must be both a junior and a senior team that competes at the world championships. The first junior cheerleading team selected to become the junior national team was from Eastside Middle School, located in Mount Washington, Kentucky, which was chosen to represent the United States in the inaugural junior division at the world championships. The ICU holds training seminars for judges and coaches, global events, and the World Cheerleading Championships. The ICU has also applied for full recognition by the International Olympic Committee (IOC) and is compliant with the code set by the World Anti-Doping Agency (WADA). International Federation of Cheerleading (IFC): Established on July 5, 1998, the International Federation of Cheerleading (IFC) is a non-profit federation based in Tokyo, Japan, and is a world governing body of cheerleading, primarily in Asia. The IFC's objectives are to promote cheerleading worldwide, to spread knowledge of cheerleading, and to develop friendly relations among the member associations and federations. USA Cheer: The USA Federation for Sport Cheering (USA Cheer) was established in 2007 to serve as the national governing body for all types of cheerleading in the United States and is recognized by the ICU. "The USA Federation for Sport Cheering is a not-for-profit 501(c)(6) organization that was established in 2007 to serve as the National Governing Body for Sport Cheering in the United States. USA Cheer exists to serve the cheer community, including club cheering (all star) and traditional school-based cheer programs, and the growing sport of STUNT. USA Cheer has three primary objectives: help grow and develop interest and participation in cheer throughout the United States; promote safety and safety education for cheer in the United States; and represent the United States of America in international cheer competitions." In March 2018, USA Cheer absorbed the American Association of Cheerleading Coaches and Advisors (AACCA) and now provides safety guidelines and training for all levels of cheerleading. Additionally, it organizes the USA National Team. Universal Cheerleaders Association: UCA is an association owned by Varsity Spirit. "Universal Cheerleaders Association was founded in 1974 by Jeff Webb to provide the best educational training for cheerleaders with the goal of incorporating high-level skills with traditional crowd leading. It was Jeff's vision that would transform cheerleading into the dynamic, athletic combination of high energy entertainment and school leadership that is loved by so many." "Today, UCA is the largest cheerleading camp company in the world, offering the widest array of dates and locations of any camp company. We also celebrate cheerleaders' incredible hard work and athleticism through the glory of competition at over 50 regional events across the country and our Championships at the Walt Disney World Resort every year."
"UCA has instilled leadership skills and personal confidence in more than 4.5 million athletes on and off the field while continuing to be the industry's leader for more than forty-five years. UCA has helped many cheerleaders get the training they need to succeed. Competitions and companies. Asian Thailand Cheerleading Invitational (ATCI): Organised by the Cheerleading Association of Thailand (CAT) in accordance with the rules and regulations of the International Federation of Cheerleading (IFC). The ATCI is held every year since 2009. At the ATCI, many teams from all over Thailand compete, joining them are many invited neighbouring nations who also send cheer squads. Cheerleading Asia International Open Championships (CAIOC): Hosted by the Foundation of Japan Cheerleading Association (FJCA) in accordance with the rules and regulations of the IFC. The CAIOC has been a yearly event since 2007. Every year, many teams from all over Asia converge in Tokyo to compete. Cheerleading World Championships (CWC): Organised by the IFC. The IFC is a non-profit organisation founded in 1998 and based in Tokyo, Japan. The CWC has been held every two years since 2001, and to date, the competition has been held in Japan, the United Kingdom, Finland, Germany, and Hong Kong. The 6th CWC was held at the Hong Kong Coliseum on November 26–27, 2011. ICU World Championships: The International Cheer Union currently encompasses 105 National Federations from countries across the globe. Every year, the ICU host the World Cheerleading Championship. This competition uses a more collegiate style performance and rulebook. Countries assemble and send only one team to represent them. National Cheerleading Championships (NCC): The NCC is the annual IFC-sanctioned national cheerleading competition in Indonesia organised by the Indonesian Cheerleading Community (ICC). Since NCC 2010, the event is now open to international competition, representing a significant step forward for the ICC. Teams from many countries such as Japan, Thailand, the Philippines, and Singapore participated in the ground breaking event. Pan-American Cheerleading Championships (PCC): The PCC was held for the first time in 2009 in the city of Latacunga, Ecuador and is the continental championship organised by the Pan-American Federation of Cheerleading (PFC). The PFC, operating under the umbrella of the IFC, is the non-profit continental body of cheerleading whose aim it is to promote and develop cheerleading in the Americas. The PCC is a biennial event, and was held for the second time in Lima, Peru, in November 2010. USASF/IASF Worlds: Many United States cheerleading organizations form and register the not-for-profit entity the United States All Star Federation (USASF) and also the International All Star Federation (IASF) to support international club cheerleading and the World Cheerleading Club Championships. The first World Cheerleading Championships, or Cheerleading Worlds, were hosted by the USASF/IASF at the Walt Disney World Resort and taped for an ESPN global broadcast in 2004. This competition is only for All-Star/Club cheer. Only level 6 and 7 teams may attend and must receive a bid from a partner company. Varsity: Varsity Spirit, a branch of Varsity Brands is a parent company which, over the past 10 years, has absorbed or bought most other cheerleading event production companies. The following is a list of subsidiary competition companies owned by Varsity Spirit: Title IX sports status. 
In the United States, the designation of a "sport" is important because of Title IX. There is a large debate on whether or not cheerleading should be considered a sport for the purposes of Title IX (a portion of the United States Education Amendments of 1972 forbidding discrimination under any education program on the basis of sex). These arguments have varied from institution to institution and are reflected in how institutions treat and organize cheerleading within their schools. Some institutions have been accused of not providing equal opportunities to their male students or of not treating cheerleading as a sport, which reflects on the opportunities they provide to their athletes. The Office for Civil Rights (OCR) issued memos and letters to schools stating that cheerleading, both sideline and competitive, may not be considered an "athletic program" for the purposes of Title IX. Supporters consider cheerleading, as a whole, a sport, citing the heavy use of athletic talents, while critics see it as a physical activity, arguing that a "sport" implies competition among all squads and not all squads compete, and pointing to the subjectivity of competitions where, as with gymnastics, diving, and figure skating, scores are assessed based on human judgment and not an objective goal or measurement of time. The Office for Civil Rights' primary concern was ensuring that institutions complied with Title IX, which means offering equal opportunities to all students regardless of their gender. In its memos, the OCR's main point against cheerleading being a sport was that the activity was too underdeveloped and unorganized to have varsity-level athletic standing among students. This claim was not universal, and the Office for Civil Rights would review cheerleading on a case-by-case basis. Because of this, the status of cheerleading under Title IX has varied from region to region based on the institution and how it organizes its teams. However, within its decisions, the Office for Civil Rights never clearly stated any guidelines on what was and was not considered a sport under Title IX. On January 27, 2009, in a lawsuit involving an accidental injury sustained during a cheerleading practice, the Wisconsin Supreme Court ruled that cheerleading is a full-contact sport in that state, not allowing any participants to be sued for accidental injury. In contrast, on July 21, 2010, in a lawsuit involving whether college cheerleading qualified as a sport for purposes of Title IX, a federal court, citing a current lack of program development and organization, ruled that it is not a sport at all. The National Collegiate Athletic Association (NCAA) does not recognize cheerleading as a sport. In 2014, the American Medical Association adopted a policy that, as the leading cause of catastrophic injuries of female athletes both in high school and college, cheerleading should be considered a sport. While there are cheerleading teams at the majority of the NCAA's Division I schools, they are still not recognized as a sport. This results in many teams not being properly funded. Additionally, few to no college programs offer scholarships, because their universities cannot offer athletic scholarships to "spirit" team members. Title IX Guidelines for Sports. In 2010, Quinnipiac University was sued for not providing equal opportunities for female athletes as required by Title IX. The university disbanded its volleyball team and created a new competitive cheerleading sports team. The issue in Biediger v.
Quinnipiac University was whether competitive cheerleading could be considered a sport for Title IX. The university had not provided additional opportunities for its female athletes, which led the court to rule that cheerleading could not count as a varsity sport. This case established clear guidelines on what qualifies as a sport under Title IX; these guidelines are known as the three-pronged approach. The three-pronged approach is as follows: The three-pronged approach was the first official guideline that clearly stated what criteria were necessary when deciding whether an activity was considered a sport under Title IX. This approach was used by the Office for Civil Rights and continues to be used. Based on this approach, the Office for Civil Rights still does not consider cheerleading, either sideline or competitive, a sport under Title IX. Cheerleading in Canada. Cheerleading in Canada is rising in popularity among youth in co-curricular programs. Cheerleading has grown from the sidelines into a competitive activity throughout the world, and in particular in Canada. Cheerleading has a few streams in Canadian sports culture: it is available at the middle school, high school, and collegiate levels, and is best known in its all-star form. There are multiple regional, provincial, and national championship opportunities for all athletes participating in cheerleading. Canada does not have provincial teams, just a national program referred to as Team Canada, facilitated by Cheer Canada. Its first year as a national team was 2009, when it represented Canada at the International Cheer Union (ICU) World Cheerleading Championships. Competition and governance in Canada. Cheer Canada acts as the Canadian national governing body for cheer, as recognised by the International Cheer Union. A number of provincial sports organizations also exist in Canada under Cheer Canada, each governing cheer within its own province: BC Sport Cheer, Alberta Cheerleading Association, Saskatchewan Cheerleading Association, Cheer Manitoba, Ontario Cheerleading Federation, Federation de Cheerleading du Quebec, Newfoundland and Labrador Cheerleading Athletics, Cheer New Brunswick, and Cheer Nova Scotia. Cheer Canada and the provincial organizations use the IASF divisions and rules for all-star cheer and performance cheer (all-star dance) and the ICU divisions and rules for scholastic cheer. Canadian Cheer (previously known as Cheer Evolution) is the largest cheer and dance organization in Canada, and currently complies with Cheer Canada's rules and guidelines for its 15 events. Varsity Spirit also hosts events within Canada using the Cheer Canada/IASF rules. There are currently over 400 clubs and schools recognised by Cheer Canada, with over 25,000 participants in 2023. Canadian cheer on the global stage. There are two world championship competitions that Canada participates in. The first is the ICU World Championships, where the Canadian national teams compete against other countries. The second is the Cheerleading Worlds, where Canadian club teams, referred to as "all-star" teams, compete within the IASF divisions. National team members who compete at the ICU Worlds can also compete with their "all-star club" teams at the IASF World Championships. Although athletes can compete in both International Cheer Union (ICU) and IASF championships, crossovers between teams at each individual competition are not permitted.
Teams compete against the other teams from their countries on the first day of competition, and the top three teams from each country in each division continue to the finals. At the end of the finals, the team scoring the highest for its country earns the "Nations Cup". Canada has multiple teams across the country that compete in the IASF Cheerleading Worlds Championship. In total, Canada has had 98 international podium finishes at cheer events. The International Cheer Union (ICU) is made up of 119 member nations, which are eligible to field teams to compete against one another at the ICU World Championships in a number of divisions in both cheerleading and performance cheer, with special divisions for youth, junior, and adaptive-abilities athletes. Cheer Canada fields a national team, with up to 40 athletes from around the country for each of a senior national all-girl team and a senior national coed team; the teams train at three camps across the season in Canada before 28 athletes per team are selected to train in Florida, with 24 athletes going on to compete on the competition floor at ICU Worlds. In the 2023 ICU World Championships, Canada won a total of 4 medals (1 gold and 3 silver), with teams entered in the Youth All Girl, Youth Coed, Unified Median, Unified Advanced, Premier All Girl, Premier Coed, Performance Cheer Hip Hop Doubles, Performance Cheer Pom Doubles, and Performance Cheer Pom divisions. In total, Team Canada holds podium placements at the ICU World Championships from the following years/divisions: Cheerleading in Mexico. Cheerleading in Mexico is a popular sport commonly seen at Mexican college football and professional Mexican soccer events. Cheerleading emerged at the National Autonomous University of Mexico (UNAM), the country's foremost institution of higher education, during the 1930s, almost immediately after the university was granted its autonomy. Since then, the activity has continued to evolve: it developed first only at UNAM, later at other secondary and higher education institutions in Mexico City, and it is now practised in practically the entire country. Competition in Mexico. In Mexico, the sport is endorsed by the Mexican Federation of Cheerleaders and Cheerleading Groups (Federación Mexicana de Porristas y Grupos de Animación, FMPGA), a body that regulates competitions in Mexico and oversees subdivisions such as the Olympic Confederation of Cheerleaders (COP Brands), the National Organization of Cheerleaders (Organización Nacional de Porristas, ONP), and the Mexican Organization of Trainers and Animation Groups (Organización Mexicana de Entrenadores y Grupos de Animación, OMEGA Mexico), these being the largest in the country. In 2021, the third edition of the National Championship of State Teams was organized by the Mexican Federation of Cheerleaders and Cheerleading Groups; on this occasion, the event was held virtually and broadcast live through the Vimeo platform. Mexican cheer on the global stage. In Mexico there are more than 500 teams and almost 10,000 athletes who practice this sport, in addition to a national team, which won first place and a gold medal at the cheerleading world championship organized by the ICU (International Cheer Union) on April 24, 2015. In 2016, Mexico became the country with the second-most medals in the world in this sport. With 27 medals, it is considered the second world power in this sport, behind only the United States.
At the 2019 World Cheerleading Championship, Mexico ranked 4th in the Coed Premier division, just behind the United States, Canada, and Taiwan. In 2021, the Mexican team won 3rd place in the Junior Boom category at the 2021 World Cheerleading Championship hosted by the international cheerleading federation. Cheerleading in the United Kingdom. The history and growth of cheerleading in the United Kingdom are covered in a separate article, which can be used to compare and contrast the activity with cheerleading in the U.S. Cheerleading in Australia. The history and growth of cheerleading in Australia are covered in a separate article, which can be used to compare and contrast the activity in the U.S. and in Australia. Notable former cheerleaders. A separate article lists notable former cheerleaders and well-known cheerleading squads.
6751
47492335
https://en.wikipedia.org/wiki?curid=6751
Cottingley Fairies
The Cottingley Fairies are the subject of a hoax which purports to provide evidence of the existence of fairies. They appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of "The Strand Magazine". Doyle was enthusiastic about the photographs, and interpreted them as clear and visible evidence of supernatural phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked. Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the "Daily Express" newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story. In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. As of 2019 the photographs and the cameras used are in the collections of the National Science and Media Museum in Bradford, England. 1917 photographs. In mid-1917, nine-year-old Frances Griffiths and her mother, both newly arrived in England from South Africa, were staying with Frances's aunt, Elsie Wright's mother, Polly, in the village of Cottingley in West Yorkshire; Elsie was then 16 years old. The two girls often played together beside the beck at the bottom of the garden, much to their mothers' annoyance, because they frequently came back with wet feet and clothes. Frances and Elsie said they only went to the beck to see the fairies, and to prove it, Elsie borrowed her father's camera, a Midg quarter-plate. The girls returned about 30 minutes later, "triumphant". Elsie's father, Arthur, was a keen amateur photographer, and had set up his own darkroom. The picture on the photographic plate he developed showed Frances behind a bush in the foreground, on which four fairies appeared to be dancing. Knowing his daughter's artistic ability, and that she had spent some time working in a photographer's studio, he dismissed the figures as cardboard cutouts. Two months later the girls borrowed his camera again, and this time returned with a photograph of Elsie sitting on the lawn holding out her hand to a gnome. Exasperated by what he believed to be "nothing but a prank", and convinced that the girls must have tampered with his camera in some way, Arthur Wright refused to lend it to them again. His wife Polly, however, believed the photographs to be authentic. Towards the end of 1918, Frances sent a letter to Johanna Parvin, a friend in Cape Town, South Africa, where Frances had lived for most of her life, enclosing the photograph of herself with the fairies. On the back she wrote "It is funny, I never used to see them in Africa. It must be too hot for them there." The photographs became public in mid-1919, after Elsie's mother attended a meeting of the Theosophical Society in Bradford.
The lecture that evening was on "fairy life", and at the end of the meeting Polly Wright showed the two fairy photographs taken by her daughter and niece to the speaker. As a result, the photographs were displayed at the society's annual conference in Harrogate, held a few months later. There they came to the attention of a leading member of the society, Edward Gardner. One of the central beliefs of theosophy is that humanity is undergoing a cycle of evolution, towards increasing "perfection", and Gardner recognised the potential significance of the photographs for the movement: Initial examinations. Gardner sent the prints along with the original glass-plate negatives to Harold Snelling, a photography expert. Snelling's opinion was that "the two negatives are entirely genuine, unfaked photographs ... [with] no trace whatsoever of studio work involving card or paper models". He did not go so far as to say that the photographs showed fairies, stating only that "these are straight forward photographs of whatever was in front of the camera at the time". Gardner had the prints "clarified" by Snelling, and new negatives produced, "more conducive to printing", for use in the illustrated lectures he gave around Britain. Snelling supplied the photographic prints which were available for sale at Gardner's lectures. Author and prominent spiritualist Sir Arthur Conan Doyle learned of the photographs from the editor of the spiritualist publication "Light". Doyle had been commissioned by "The Strand Magazine" to write an article on fairies for their Christmas issue, and the fairy photographs "must have seemed like a godsend" according to broadcaster and historian Magnus Magnusson. Doyle contacted Gardner in June 1920 to determine the background to the photographs, and wrote to Elsie and her father to request permission from the latter to use the prints in his article. Arthur Wright was "obviously impressed" that Doyle was involved, and gave his permission for publication, but he refused payment on the grounds that, if genuine, the images should not be "soiled" by money. Gardner and Doyle sought a second expert opinion from the photographic company Kodak. Several of the company's technicians examined the enhanced prints, and although they agreed with Snelling that the pictures "showed no signs of being faked", they concluded that "this could not be taken as conclusive evidence ... that they were authentic photographs of fairies". Kodak declined to issue a certificate of authenticity. Gardner believed that the Kodak technicians might not have examined the photographs entirely objectively, observing that one had commented "after all, as fairies couldn't be true, the photographs must have been faked somehow". The prints were also examined by another photographic company, Ilford, who reported unequivocally that there was "some evidence of faking". Gardner and Doyle, perhaps rather optimistically, interpreted the results of the three expert evaluations as two in favour of the photographs' authenticity and one against. Doyle also showed the photographs to the physicist and pioneering psychical researcher Sir Oliver Lodge, who believed the photographs to be fake. He suggested that a troupe of dancers had masqueraded as fairies, and expressed doubt as to their "distinctly 'Parisienne' hairstyles". On 4 October 2018 the first two of the photographs, "Alice and the Fairies" and "Iris and the Gnome," were to be sold by Dominic Winter Auctioneers, in Gloucestershire.
The prints, suspected to have been made in 1920 to sell at theosophical lectures, were expected to bring £700–£1000 each. As it turned out, "Iris with the Gnome" sold for a hammer price of £5,400 (plus 24% buyer's premium incl. VAT), while "Alice and the Fairies" sold for a hammer price of £15,000 (plus 24% buyer's premium incl. VAT). 1920 photographs. Doyle was preoccupied with organising an imminent lecture tour of Australia, and in July 1920, sent Gardner to meet the Wright family. By this point, Frances was living with her parents in Scarborough, but Elsie's father told Gardner that he had been so certain the photographs were fakes that while the girls were away he searched their bedroom and the area around the beck (stream), looking for scraps of pictures or cutouts, but found nothing "incriminating". Gardner believed the Wright family to be honest and respectable. To place the matter of the photographs' authenticity beyond doubt, he returned to Cottingley at the end of July with two W. Butcher & Sons Cameo folding plate cameras and 24 secretly marked photographic plates. Frances was invited to stay with the Wright family during the school summer holiday so that she and Elsie could take more pictures of the fairies. Gardner described his briefing in his 1945 "Fairies: A Book of Real Fairies": Until 19 August the weather was unsuitable for photography. Because Frances and Elsie insisted that the fairies would not show themselves if others were watching, Elsie's mother was persuaded to visit her sister's for tea, leaving the girls alone. In her absence the girls took several photographs, two of which appeared to show fairies. In the first, "Frances and the Leaping Fairy", Frances is shown in profile with a winged fairy close by her nose. The second, "Fairy offering Posy of Harebells to Elsie", shows a fairy either hovering or tiptoeing on a branch, and offering Elsie a flower. Two days later the girls took the last picture, "Fairies and Their Sun-Bath". The plates were packed in cotton wool and returned to Gardner in London, who sent an "ecstatic" telegram to Doyle, by then in Melbourne. Doyle wrote back: Publication and reaction. Doyle's article in the December 1920 issue of "The Strand" contained two higher-resolution prints of the 1917 photographs, and sold out within days of publication. To protect the girls' anonymity, Frances and Elsie were called Alice and Iris respectively, and the Wright family was referred to as the "Carpenters". An enthusiastic and committed spiritualist, Doyle hoped that if the photographs convinced the public of the existence of fairies then they might more readily accept other psychic phenomena. He ended his article with the words: Early press coverage was "mixed", generally a combination of "embarrassment and puzzlement"; though Japanese scholar Kaori Inuma has noted that there were also open and positive assessments. The historical novelist and poet Maurice Hewlett published a series of articles in the literary journal "John O' London's Weekly", in which he concluded: "And knowing children, and knowing that Sir Arthur Conan Doyle has legs, I decide that the Miss Carpenters have pulled one of them." The London newspaper "Truth" on 5 January 1921 expressed a similar view; "For the true explanation of these fairy photographs what is wanted is not a knowledge of occult phenomena but a knowledge of children." Some public figures were more sympathetic. 
Margaret McMillan, the educational and social reformer, wrote: "How wonderful that to these dear children such a wonderful gift has been vouchsafed." The novelist Henry De Vere Stacpoole decided to take the fairy photographs and the girls at face value. In a letter to Gardner he wrote: "Look at Alice's [Frances'] face. Look at Iris's [Elsie's] face. There is an extraordinary thing called Truth which has 10 million faces and forms – it is God's currency and the cleverest coiner or forger can't imitate it." Major John Hall-Edwards, a keen photographer and pioneer of medical X-ray treatments in Britain, was a particularly vigorous critic: Doyle used the later photographs in 1921 to illustrate a second article in "The Strand", in which he described other accounts of fairy sightings. The article formed the foundation for his 1922 book "The Coming of the Fairies". As before, the photographs were received with mixed credulity. Sceptics noted that the fairies "looked suspiciously like the traditional fairies of nursery tales" and that they had "very fashionable hairstyles". Gardner's final visit. Gardner made a final visit to Cottingley in August 1921. He again brought cameras and photographic plates for Frances and Elsie, but was accompanied by the occultist Geoffrey Hodson. Although neither of the girls claimed to see any fairies, and there were no more photographs, "on the contrary, he [Hodson] saw them [fairies] everywhere" and wrote voluminous notes on his observations. By now Elsie and Frances were tired of the whole fairy business. Years later Elsie looked at a photograph of herself and Frances taken with Hodson and said: "Look at that, fed up with fairies." Both Elsie and Frances later admitted that they "played along" with Hodson "out of mischief", and that they considered him "a fake". Later investigations. Public interest in the Cottingley Fairies gradually subsided after 1921. Elsie and Frances both eventually married, moved away from the area and each lived overseas for varying periods of time. In 1966, a reporter from the "Daily Express" newspaper traced Elsie, who was by then back in England. She admitted in an interview given that year that the fairies might have been "figments of my imagination", but left open the possibility she believed that she had somehow managed to photograph her thoughts. The media subsequently became interested in Frances and Elsie's photographs once again. BBC television's "Nationwide" programme investigated the case in 1971, but Elsie stuck to her story: "I've told you that they're photographs of figments of our imagination, and that's what I'm sticking to". Elsie and Frances were interviewed by journalist Austin Mitchell in September 1976, for a programme broadcast on Yorkshire Television. When pressed, both women agreed that "a rational person doesn't see fairies", but they denied having fabricated the photographs. In 1978 the magician and scientific sceptic James Randi and a team from the Committee for the Scientific Investigation of Claims of the Paranormal examined the photographs, using a "computer enhancement process". They concluded that the photographs were fakes, and that strings could be seen supporting the fairies. Geoffrey Crawley, editor of the "British Journal of Photography", undertook a "major scientific investigation of the photographs and the events surrounding them", published between 1982 and 1983, "the first major postwar analysis of the affair". He also concluded that the pictures were fakes. Confession. 
In 1983, the cousins admitted in an article published in the magazine "The Unexplained" that the photographs had been faked, although both maintained that they really had seen fairies. Elsie had copied illustrations of three dancing fairies by Claude Shepperson from a book that Frances had brought back with her from South Africa. This was the "Princess Mary's Gift Book", published towards the beginning of the war. Elsie changed few details, other than adding wings. It is possible that a poem by Alfred Noyes that accompanied the illustrations also inspired Elsie and Frances. They said they had then cut out the cardboard figures and supported them with hatpins, disposing of their props in the beck once the photograph had been taken. But the cousins disagreed about the fifth and final photograph, which Doyle in his "The Coming of the Fairies" described in this way: Elsie maintained it was a fake, just like all the others, but Frances insisted that it was genuine. In an interview given in the early 1980s Frances said: Both Frances and Elsie claimed to have taken the fifth photograph. In a letter published in "The Times" newspaper on 9 April 1983, Geoffrey Crawley explained the discrepancy by suggesting that the photograph was "an unintended double exposure of fairy cutouts in the grass", and thus "both ladies can be quite sincere in believing that they each took it". In a 1985 interview on Yorkshire Television's "Arthur C. Clarke's World of Strange Powers", Elsie said that she and Frances were too embarrassed to admit the truth after fooling Doyle, the author of Sherlock Holmes: "Two village kids and a brilliant man like Conan Doyle – well, we could only keep quiet." In the same interview Frances said: "I never even thought of it as being a fraud – it was just Elsie and I having a bit of fun and I can't understand to this day why they were taken in – they wanted to be taken in." Subsequent history. Frances died in 1986, and Elsie in 1988. Prints of their photographs of the fairies, along with a few other items including a first edition of Doyle's book "The Coming of the Fairies", were sold at auction in London for £21,620 in 1998. That same year, Geoffrey Crawley sold his Cottingley Fairy material to the National Museum of Film, Photography and Television in Bradford (now the National Science and Media Museum), where it is on display. The collection included prints of the photographs, two of the cameras used by the girls, watercolours of fairies painted by Elsie, and a nine-page letter from Elsie admitting to the hoax. The glass photographic plates were bought for £6,000 by an unnamed buyer at a London auction held in 2001. Frances's daughter, Christine Lynch, appeared in an episode of the television programme "Antiques Roadshow" in Belfast, broadcast on BBC One in January 2009, with the photographs and one of the cameras given to the girls by Doyle. Christine told the expert, Paul Atterbury, that she believed, as her mother had done, that the fairies in the fifth photograph were genuine. Atterbury estimated the value of the items at between £25,000 and £30,000. The first edition of Frances's memoirs was published a few months later, under the title "Reflections on the Cottingley Fairies". The book contains correspondence, sometimes "bitter", between Elsie and Frances. In one letter, dated 1983, Frances wrote: The 1997 films "FairyTale: A True Story" and "Photographing Fairies" were inspired by the events surrounding the Cottingley Fairies.
The photographs were parodied in a 1994 book written by Terry Jones and Brian Froud, "Lady Cottington's Pressed Fairy Book". In A. J. Elwood's 2021 novel "The Cottingley Cuckoo", a series of letters, purportedly written soon after the Cottingley fairy photographs were published, claim further sightings of fairies and proof of their existence. In 2017 a further two fairy photographs were presented as evidence that the girls' parents were part of the conspiracy. Dating from 1917 and 1918, both photographs are poorly executed copies of two of the original fairy photographs. One was published in 1918 in "The Sphere" newspaper, which was before the originals had been seen by anyone outside the girls' immediate family. In 2019, a print of the first of the five photographs sold for £1,050. A print of the second was also put up for sale but failed to sell, as it did not meet its £500 reserve price. The pictures previously belonged to the Reverend George Vale Owen. In December 2019, the third camera used to take the images was acquired by the National Science and Media Museum.
6752
1300886809
https://en.wikipedia.org/wiki?curid=6752
Cheka
The All-Russian Extraordinary Commission (), abbreviated as VChK (), and commonly known as the Cheka (), was the first Soviet secret police organization. It was established on by the Council of People's Commissars of the Russian SFSR, and was led by Felix Dzerzhinsky. By the end of the Russian Civil War in 1921, the Cheka had at least 200,000 personnel. Ostensibly created to protect the October Revolution from "class enemies" such as the bourgeoisie and members of the clergy, the Cheka soon became a tool of repression wielded against all political opponents of the Bolshevik regime. The organization had responsibility for counterintelligence, oversight of the loyalty of the Red Army, and protection of the country's borders, as well as the collection of human and technical intelligence. At the direction of Vladimir Lenin, the Cheka performed mass arrests, imprisonments, torture, and executions without trial in what came to be known as the "Red Terror". It policed the Gulag system of labor camps, conducted requisitions of food, and put down rebellions by workers and peasants. The Cheka was responsible for executing at least 50,000 to as many as 200,000 people, though estimates vary widely. The Cheka, the first in a long succession of Soviet secret police agencies, established the security service as a major player in Soviet politics. It was dissolved in February 1922, and succeeded by the State Political Directorate (GPU). Throughout the Soviet era, members of the secret police were referred to as "Chekists". Name. The official designation was All-Russian Extraordinary (or Emergency) Commission for Combating Counter-Revolution and Sabotage under the Council of People's Commissars of the RSFSR (, "Vserossiyskaya chrezvychaynaya komissiya po borbe s kontrrevolyutsiyey i sabotazhem pri Sovete narodnykh komisarov RSFSR"). In 1918, its name was changed, becoming All-Russian Extraordinary Commission for Combating Counter-Revolution, Profiteering and Corruption. A member of Cheka was called a "chekist" (). Also, the term "chekist" often referred to Soviet secret police throughout the Soviet period, despite official name changes over time. In "The Gulag Archipelago", Alexander Solzhenitsyn recalls that "zeks" in the labor camps used "old chekist" as a mark of special esteem for particularly experienced camp administrators. The term is still found in use in Russia today (for example, President Vladimir Putin has been referred to in the Russian media as a "chekist" due to his career in the KGB and as head of the KGB's successor, FSB). The Chekists commonly dressed in black leather, including long flowing coats, reportedly after being issued such distinctive coats early in their existence. Western communists adopted this clothing fashion. The Chekists also often carried with them Greek-style worry beads made of amber, which had become "fashionable among high officials during the time of the 'cleansing. History. In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. These troops policed labor camps, ran the Gulag system, conducted requisitions of food, and subjected political opponents to secret arrest, detention, torture and summary execution. They also put down rebellions and riots by workers or peasants, and mutinies in the desertion-plagued Red Army. 
After 1922, Cheka groups underwent the first of a series of reorganizations; however the theme of a government dominated by "the organs" persisted indefinitely afterward, and Soviet citizens continued to refer to members of the various organs as Chekists. Creation. In the first month and a half after the October Revolution (1917), the duty of "extinguishing the resistance of exploiters" was assigned to the Petrograd Military Revolutionary Committee (or PVRK). It represented a temporary body working under directives of the Council of People's Commissars (Sovnarkom) and Central Committee of RDSRP(b). The VRK created new bodies of government, organized food delivery to cities and the Army, requisitioned products from bourgeoisie, and sent its emissaries and agitators into provinces. One of its most important functions was the security of revolutionary order, and the fight against counterrevolutionary activity (see: Anti-Soviet agitation). On December 1, 1917, the All-Russian Central Executive Committee (VTsIK or TsIK) reviewed a proposed reorganization of the VRK, and possible replacement of it. On December 5, the Petrograd VRK published an announcement of dissolution and transferred its functions to the department of TsIK for the fight against "counterrevolutionaries". On December 6, the Council of People's Commissars (Sovnarkom) strategized how to persuade government workers to strike against counter-revolution across Russia. They decided that a special commission was needed to implement the "most energetically revolutionary" measures. Felix Dzerzhinsky (the Iron Felix) was appointed as Director and invited the participation of the following individuals: V. K. Averin, I. K. Ksenofontov, S. K. Ordzhonikidze, Ya. Kh. Peters, K. A. Peterson, V. A. Trifonov, I. S. Unshlikht, V. N. Vasilevsky, V. N. Yakovleva, V. V. Yakovlev, D. G. Yevseyev, N. A. Zhydelev. On December 7, 1917, all of those invited except Zhydelev and Vasilevsky gathered in the Smolny Institute with Dzerzhinsky to discuss the competence and structure of the commission to combat counterrevolution and sabotage. The obligations of the commission were: "to liquidate to the root all of the counterrevolutionary and sabotage activities and all attempts to them in all of Russia, to hand over counter-revolutionaries and saboteurs to the revolutionary tribunals, develop measures to combat them and relentlessly apply them in real-world applications. The commission should only conduct a preliminary investigation". The commission should also observe the press and counterrevolutionary parties, sabotaging officials and other criminals. Three sections were created: informational, organizational, and a unit to combat counter-revolution and sabotage. Upon the end of the meeting, Dzerzhinsky reported to the Sovnarkom with the requested information. The commission was allowed to apply such measures of repression as 'confiscation, deprivation of ration cards, publication of lists of enemies of the people etc.'". That day, Sovnarkom officially confirmed the creation of VCheKa. The commission was created not under the VTsIK as was previously anticipated, but rather under the Council of the People's Commissars. On December 8, 1917, some of the original members of the Cheka were replaced. Averin, Ordzhonikidze, and Trifonov were replaced by V. V. Fomin, S. E. Shchukin, Ilyin, and Chernov. On the meeting of December 8, the presidium of VChK was elected of five members, and chaired by Dzerzhinsky. 
The issues of "speculation" or profiteering, such as by black market grain sellers and "corruption" was raised at the same meeting, which was assigned to Peters to address and report with results to one of the next meetings of the commission. A circular, published on , gave the address of VCheka's first headquarters as "Petrograd, Gorokhovaya 2, 4th floor". On December 11, Fomin was ordered to organize a section to suppress "speculation." And in the same day, VCheKa offered Shchukin to conduct arrests of counterfeiters. In January 1918, a subsection of the anti-counterrevolutionary effort was created to police bank officials. The structure of VCheKa was changing repeatedly. By March 1918, when the organization came to Moscow, it contained the following sections: against counterrevolution, speculation, non-residents, and information gathering. By the end of 1918–1919, some new units were created: secretly operative, investigatory, of transportation, military (special), operative, and instructional. By 1921, it changed once again, forming the following sections: directory of affairs, administrative-organizational, secretly operative, economical, and foreign affairs. First months. In the first months of its existence, VCheKa consisted of only 40 officials. It commanded a team of soldiers, the Sveaborgesky regiment, as well as a group of Red Guardsmen. On January 14, 1918, Sovnarkom ordered Dzerzhinsky to organize teams of "energetic and ideological" sailors to combat speculation. By the spring of 1918, the commission had several teams: in addition to the Sveaborge team, it had an intelligence team, a team of sailors, and a strike team. Through the winter of 1917–1918, all activities of VCheKa were centralized mainly in the city of Petrograd. It was one of several other commissions in the country which fought against counterrevolution, speculation, banditry, and other activities perceived as crimes. Other organizations included: the Bureau of Military Commissars, and an Army-Navy investigatory commission to attack the counterrevolutionary element in the Red Army, plus the Central Requisite and Unloading Commission to fight speculation. The investigation of counterrevolutionary or major criminal offenses was conducted by the Investigatory Commission of Revtribunal. The functions of VCheKa were closely intertwined with the Commission of V. D. Bonch-Bruyevich, which beside the fight against wine pogroms was engaged in the investigation of most major political offenses (see: Bonch-Bruyevich Commission). All results of its activities, VCheKa had either to transfer to the Investigatory Commission of Revtribunal, or to dismiss. The control of the commission's activity was provided by the People's Commissariat for Justice (Narkomjust, at that time headed by Isaac Steinberg) and Internal Affairs (at that time headed by Grigory Petrovsky). Although the VCheKa was officially an independent organization from Internal Affairs, its chief members such as Dzerzhinsky, Latsis, Unszlicht, and Uritsky (all main chekists), since November 1917 composed the collegiate of Internal Affairs headed by Petrovsky. In November 1918, Petrovsky was appointed as head of the All-Ukrainian Central Military Revolutionary Committee during VCheKa's expansion to provinces and front-lines. At the time of political competition between Bolsheviks and SRs (January 1918), Left SRs attempted to curb the rights of VCheKa and establish through the Narkomiust their control over its work. 
Having failed in attempts to subordinate the VCheKa to Narkomiust, the Left SRs tried to gain control of the Extraordinary Commission in a different way: they requested that the Central Committee of the party be granted the right to directly enter their representatives into the VCheKa. Sovnarkom recognized the desirability of including five representatives of the Left Socialist-Revolutionary faction of VTsIK. Left SRs were granted the post of a companion (deputy) chairman of VCheKa. However, Sovnarkom, in which the majority belonged to the representatives of RSDLP(b) retained the right to approve members of the collegium of the VCheKa. Originally, members of the Cheka were exclusively Bolshevik; however, in January 1918, Left SRs also joined the organization. The Left SRs were expelled or arrested later in 1918, following the attempted assassination of Lenin by an SR, Fanni Kaplan. Consolidation of VCheKa and National Establishment. By the end of January 1918, the Investigatory Commission of Petrograd Soviet (probably same as of Revtribunal) petitioned Sovnarkom to delineate the role of detection and judicial-investigatory organs. It offered to leave, for the VCheKa and the Commission of Bonch-Bruyevich, only the functions of detection and suppression, while investigative functions entirely transferred to it. The Investigatory Commission prevailed. On January 31, 1918, Sovnarkom ordered to relieve VCheKa of the investigative functions, leaving for the commission only the functions of detection, suppression, and prevention of anti revolutionary crimes. At the meeting of the Council of People's Commissars on January 31, 1918, a merger of VCheKa and the Commission of Bonch-Bruyevich was proposed. The existence of both commissions, VCheKa of Sovnarkom and the Commission of Bonch-Bruyevich of VTsIK, with almost the same functions and equal rights, became impractical. A decision followed two weeks later. On February 23, 1918, VCheKa sent a radio telegram to all Soviets with a petition to immediately organize emergency commissions to combat counter-revolution, sabotage and speculation, if such commissions had not been yet organized. February 1918 saw the creation of local Extraordinary Commissions. One of the first founded was the Moscow Cheka. Sections and commissariats to combat counterrevolution were established in other cities. The Extraordinary Commissions arose, usually in the areas during the moments of the greatest aggravation of political situation. On February 25, 1918, as the counterrevolutionary organization "Union of Front-liners" was making advances, the executive committee of the Saratov Soviet formed a counter-revolutionary section. On March 7, 1918, because of the move from Petrograd to Moscow, the Petrograd Cheka was created. On March 9, a section for combating counterrevolution was created under the Omsk Soviet. Extraordinary commissions were also created in Penza, Perm, Novgorod, Cherepovets, Rostov, Taganrog. On March 18, VCheKa adopted a resolution, "The Work of VCheKa on the All-Russian Scale", foreseeing the formation everywhere of Extraordinary Commissions after the same model, and sent a letter that called for the widespread establishment of the Cheka in combating counterrevolution, speculation, and sabotage. Establishment of provincial Extraordinary Commissions was largely completed by August 1918. In the Soviet Republic, there were 38 gubernatorial Chekas (Gubcheks) by this time. 
On June 12, 1918, the All-Russian Conference of Cheka adopted the "Basic Provisions on the Organization of Extraordinary Commissions". They set out to form Extraordinary Commissions not only at Oblast and Guberniya levels, but also at the large Uyezd Soviets. In August 1918, in the Soviet Republic had accounted for some 75 Uyezd-level Extraordinary Commissions. By the end of the year, 365 Uyezd-level Chekas were established. In 1918, the All-Russia Extraordinary Commission and the Soviets managed to establish a local Cheka apparatus. It included Oblast, Guberniya, Raion, Uyezd, and Volost Chekas, with Raion and Volost Extraordinary Commissioners. In addition, border security Chekas were included in the system of local Cheka bodies. In the autumn of 1918, as consolidation of the political situation of the republic continued, a move toward elimination of Uyezd-, Raion-, and Volost-level Chekas, as well as the institution of Extraordinary Commissions was considered. On January 20, 1919, VTsIK adopted a resolution prepared by VCheKa, "On the abolition of Uyezd Extraordinary Commissions". On January 16 the presidium of VCheKa approved the draft on the establishment of the Politburo at Uyezd militsiya. This decision was approved by the Conference of the Extraordinary Commission IV, held in early February 1920. Other types of Cheka. On August 3, a VCheKa section for combating counterrevolution, speculation and sabotage on railways was created. On August 7, 1918, Sovnarkom adopted a decree on the organization of the railway section at VCheKa. Combating counterrevolution, speculation, and crimes on railroads was passed under the jurisdiction of the railway section of VCheKa and local Cheka. In August 1918, railway sections were formed under the Gubcheks. Formally, they were part of the non-resident sections, but in fact constituted a separate division, largely autonomous in their activities. The gubernatorial and oblast-type Chekas retained in relation to the transportation sections only control and investigative functions. The beginning of a systematic work of organs of VCheKa in RKKA refers to July 1918, the period of extreme tension of the civil war and class struggle in the country. On July 16, 1918, the Council of People's Commissars formed the Extraordinary Commission for combating counterrevolution at the Czechoslovak (Eastern) Front, led by M. I. Latsis. In the fall of 1918, Extraordinary Commissions to combat counterrevolution on the Southern (Ukraine) Front were formed. In late November, the Second All-Russian Conference of the Extraordinary Commissions accepted a decision after a report from I. N. Polukarov to establish at all frontlines, and army sections of the Cheka and granted them the right to appoint their commissioners in military units. On December 9, 1918, the collegiate (or presidium) of VCheKa had decided to form a military section, headed by M. S. Kedrov, to combat counterrevolution in the Army. In early 1919, the military control and the military section of VCheKa were merged into one body, the Special Section of the Republic, with Kedrov as head. On January 1, he issued an order to establish the Special Section. The order instructed agencies everywhere to unite the Military control and the military sections of Chekas and to form special sections of frontlines, armies, military districts, and guberniyas. In November 1920, the Soviet of Labor and Defense created a Special Section of VCheKa for the security of the state border. 
On February 6, 1922, after the Ninth All-Russian Soviet Congress, the Cheka was dissolved by VTsIK, "with expressions of gratitude for heroic work." It was replaced by the State Political Administration (GPU), a section of Internal Affairs of the Russian Soviet Federative Socialist Republic (RSFSR). Dzerzhinsky remained as chief of the GPU. Operations. Suppression of political opposition. As its name implied, the Extraordinary Commission had virtually unlimited powers and could interpret them in any way it wished. No standard procedures were ever set up, except that the commission was supposed to send the arrested to the Military-Revolutionary tribunals if outside of a war zone. This left an opportunity for a wide range of interpretations, as the whole country was in total chaos. At the direction of Lenin, the Cheka performed mass arrests, imprisonments, and executions of "enemies of the people". In this, the Cheka said that they targeted "class enemies" such as the bourgeoisie, and members of the clergy. Within a month, the Cheka had extended its repression to all political opponents of the communist government, including anarchists and others on the left. On April 11/12, 1918, some 26 anarchist political centres in Moscow were attacked. Forty anarchists were killed by Cheka forces, and about 500 were arrested and jailed after a pitched battle took place between the two groups. In response to the anarchists' resistance, the Cheka orchestrated a massive retaliatory campaign of repression, executions, and arrests against all opponents of the Bolshevik government, in what came to be known as "Red Terror". The "Red Terror", implemented by Dzerzhinsky on September 5, 1918, was vividly described by the Red Army journal "Krasnaya Gazeta": Without mercy, without sparing, we will kill our enemies in scores of hundreds. Let them be thousands, let them drown themselves in their own blood. For the blood of Lenin and Uritsky … let there be floods of blood of the bourgeoisie – more blood, as much as possible..." An early Bolshevik, Victor Serge described in his book "Memoirs of a Revolutionary": The Cheka was also used against Nestor Makhno's Revolutionary Insurgent Army of Ukraine. After the Insurgent Army had served its purpose in aiding the Red Army to stop the Whites under Denikin, the Soviet communist government decided to eliminate the anarchist forces. In May 1919, two Cheka agents sent to assassinate Makhno were caught and executed. Many victims of Cheka repression were "bourgeois hostages" rounded up and held in readiness for summary execution in reprisal for any alleged counter-revolutionary act. Wholesale, indiscriminate arrests became an integral part of the system. The Cheka used trucks disguised as delivery trucks, called "Black Marias", for the secret arrest and transport of prisoners. It was during the Red Terror that the Cheka, hoping to avoid the bloody aftermath of having half-dead victims writhing on the floor, developed a technique for execution known later by the German words ""Nackenschuss" or ""Genickschuss", a shot to the nape of the neck, which caused minimal blood loss and instant death. The victim's head was bent forward, and the executioner fired slightly downward at point-blank range. This had become the standard method used later by the NKVD to liquidate Joseph Stalin's purge victims and others. Persecution of deserters. It is believed that there were more than three million deserters from the Red Army in 1919 and 1920 . 
Approximately 500,000 deserters were arrested in 1919, and close to 800,000 in 1920, by troops of the 'Special Punitive Department' of the Cheka, created to punish desertions. These troops were used to forcibly repatriate deserters, taking and shooting hostages to force compliance or to set an example. In September 1918, according to "The Black Book of Communism", in only twelve provinces of Russia, 48,735 deserters and 7,325 "bandits" were arrested, 1,826 were killed and 2,230 were executed. The exact identity of these individuals is confused by the fact that the Soviet Bolshevik government used the term 'bandit' to cover ordinary criminals as well as armed and unarmed political opponents, such as the anarchists. Repression. Number of victims. Estimates of Cheka executions vary widely. The lowest figures (disputed below) are provided by Dzerzhinsky's lieutenant Martyn Latsis, and are limited to the RSFSR over the period 1918–1920: one count gives 6,300 executions in 1918 and 2,089 in 1919 (up to July), a total of 8,389; another gives 6,185 in 1918 and 3,456 in 1919, a total of 9,641; a third gives 22 in January–June 1918, more than 6,000 in July–December 1918, and 12,733 for 1918–20. Experts generally agree these semi-official figures are vastly understated. Sergei Melgunov, a pioneering historian of the Red Terror, claims that this was done deliberately in an attempt to demonstrate the government's humanity. For example, he refutes the claim made by Latsis that only 22 executions were carried out in the first six months of the Cheka's existence by providing evidence that the true number was 884 executions. W. H. Chamberlin claims, "It is simply impossible to believe that the Cheka only put to death 12,733 people in all of Russia up to the end of the civil war." Donald Rayfield concurs, noting that "Plausible evidence reveals that the actual numbers … vastly exceeded the official figures." Chamberlin provides the "reasonable and probably moderate" estimate of 50,000, while others provide estimates ranging up to 500,000. Several scholars put the number of executions at about 250,000. Some believe it is possible that more people were murdered by the Cheka than died in battle. Historian James Ryan gives a modest estimate of 28,000 executions per year from December 1917 to February 1922. Lenin himself seemed unfazed by the killings. On 12 January 1920, while addressing trade union leaders, he said: "We did not hesitate to shoot thousands of people, and we shall not hesitate, and we shall save the …" On 14 May 1921, the Politburo, chaired by Lenin, passed a motion "broadening the rights of the [Cheka] in relation to the use of the [death penalty]." Scholarly estimates. There is no consensus among Western historians on the number of deaths from the Red Terror. One source gives estimates of 28,000 executions per year from December 1917 to February 1922. Estimates for the number of people shot during the initial period of the Red Terror are at least 10,000. Estimates for the whole period range from a low of 50,000 to highs of 140,000 and 200,000 executed. Most estimates put the total number of executions at about 100,000. According to Vadim Erlikhman's investigation, the number of the Red Terror's victims is at least 1,200,000 people. According to Robert Conquest, a total of 140,000 people were shot in 1917–1922. Candidate of Historical Sciences Nikolay Zayats states that the number of people shot by the Cheka in 1918–1922 is about 37,300, and the number shot in 1918–1921 by the verdicts of the tribunals about 14,200, i.e.
about 50,000–55,000 people in total, although executions and atrocities were not limited to the Cheka, having been organized by the Red Army as well. According to anti-Bolshevik Socialist Revolutionary Sergei Melgunov (1879–1956), at the end of 1919, the Special Investigation Commission to investigate the atrocities of the Bolsheviks estimated the number of deaths at 1,766,188 people in 1918–1919 only. Atrocities. The Cheka engaged in the widespread practice of torture. Depending on Cheka committees in various cities, the methods included: being skinned alive, scalped, "crowned" with barbed wire, impaled, crucified, hanged, stoned to death, tied to planks and pushed slowly into furnaces or tanks of boiling water, or rolled around naked in internally nail-studded barrels. Chekists reportedly poured water on naked prisoners in the winter-bound streets until they became living ice statues. Others beheaded their victims by twisting their necks until their heads could be torn off. The Cheka detachments stationed in Kiev would attach an iron tube to the torso of a bound victim and insert a rat in the tube closed off with wire netting, while the tube was held over a flame until the rat began gnawing through the victim's guts in an effort to escape. Women and children were also victims of Cheka terror. Women would sometimes be tortured and raped before being shot. Children between the ages of 8 and 13 were imprisoned and occasionally executed. All of these atrocities were published on numerous occasions in "Pravda" and "Izvestiya": January 26, 1919 "Izvestiya" #18 article "Is it really a medieval imprisonment?" («Неужели средневековый застенок?»); February 22, 1919 "Pravda" #12 publishes details of the Vladimir Cheka's tortures, September 21, 1922 "Socialist Herald" publishes details of series of tortures conducted by the Stavropol Cheka (hot basement, cold basement, skull measuring, etc.). The Chekists were also supplemented by the militarized Units of Special Purpose (the Party's Spetsnaz or ). Cheka was actively and openly utilizing kidnapping methods. With kidnapping methods, Cheka was able to extinguish numerous cases of discontent especially among the rural population. Among the notorious ones was the Tambov rebellion. Villages were bombarded to complete annihilation, as in the case of Tretyaki, Novokhopersk uyezd, Voronezh Governorate. As a result of this relentless violence, more than a few Chekists ended up with psychopathic disorders, which Nikolai Bukharin said were "an occupational hazard of the Chekist profession." Many hardened themselves to the executions by heavy drinking and drug use. Some developed a gangster-like slang for the verb to kill in an attempt to distance themselves from the killings, such as 'shooting partridges', or 'sealing' a victim, or giving him a "natsokal" (onomatopoeia of the trigger action). On November 30, 1992, by the initiative of the President of the Russian Federation the Constitutional Court of the Russian Federation recognized the Red Terror as unlawful, which in turn led to the suspension of Communist Party of the RSFSR. Regional Chekas. Cheka departments were organized not only in big cities and guberniya seats, but also in each uyezd, at any front-lines and military formations. Nothing is known on what resources they were created. Legacy. 
Konstantin Preobrazhenskiy criticised the continuing celebration, with the assent of the Presidents of Russia, of the professional holiday of the old and the modern Russian security services on 20 December, the anniversary of the creation of the Cheka (Vladimir Putin, himself a former KGB officer, chose not to move the date): "The successors of the KGB still haven't renounced anything; they even celebrate their professional holiday the same day, as during repression, on the 20th of December. It is as if the present intelligence and counterespionage services of Germany celebrated Gestapo Day. I can imagine how indignant our press would be!"
6753
7903804
https://en.wikipedia.org/wiki?curid=6753
Clitic
In morphology and syntax, a clitic ( , backformed from Greek "leaning" or "enclitic") is a morpheme that has syntactic characteristics of a word, but depends phonologically on another word or phrase. In this sense, it is syntactically independent but phonologically dependent—always attached to a host. A clitic is pronounced like an affix, but plays a syntactic role at the phrase level. In other words, clitics have the "form" of affixes, but the distribution of function words. Clitics can belong to any grammatical category, although they are commonly pronouns, determiners, or adpositions. Note that orthography is not always a good guide for distinguishing clitics from affixes: clitics may be written as separate words, but sometimes they are joined to the word they depend on (like the Latin clitic , meaning "and") or separated by special characters such as hyphens or apostrophes (like the English clitic "s" in "it's" for "it has" or "it is"). Classification. Clitics fall into various categories depending on their position in relation to the word they connect to. Proclitic. A proclitic appears before its host. Enclitic. An enclitic appears after its host. Endoclitic. Some authors postulate endoclitics, which split a stem and are inserted between the two elements. For example, they have been claimed to occur between the elements of bipartite verbs (equivalent to English verbs such as "take part") in the Udi language. Endoclitics have also been claimed for Pashto and Degema. However, other authors treat such forms as a sequence of clitics docked to the stem. Mesoclitic. A "mesoclitic" is a type of clitic that occurs between the stem of a verb and its affixes. Mesoclisis is rare outside of formal standard Portuguese, where it is predominantly found. In Portuguese, mesoclitic constructions are typically formed with the infinitive form of the verb, a clitic pronoun, and a lexicalized tense affix. For example, in the sentence "conquistar-se-á" ("it will be conquered"), the reflexive pronoun "se" appears between the stem "conquistar" and the future tense affix "á". This placement of the clitic is characteristic of mesoclisis. Other examples include "dá-lo-ei" ("I will give it") and "matá-la-ia" ("he/she/it would kill her"). These forms are typically found much more frequently in written Portuguese than in spoken varieties. Additionally, it is possible to use two clitics within a verb, as in "dar-no-lo-á" ("he/she/it will give it to us") and "dar-ta-ei" ("ta" = "te" + "a", "I will give it/her to you"). This phenomenon is possible due to the historical evolution of the Portuguese synthetic future tense, which comes from the fusion of the infinitive form of the verb and the finite forms of the auxiliary verb "haver" (from Latin "habēre"). This origin explains why the clitic can appear between the verb stem and its tense marker, as the future tense was originally a separate word. Colloquial Turkish exhibits an instance of a mesoclitic where the conjunction enclitic "de" ("also, as well") is inserted after the gerundive suffix "-e" connecting the verb stem to the potential suffix "-(e)bilmek", effectively rendering it in its original auxiliary verb form "bilmek" (to know). Suffixed auxiliary verbs cannot be converted into individual verbs in Standard Turkish, and the gerundive suffix is considered an inseparable part of them. Distinction. One distinction drawn by some scholars divides the broad term "clitics" into two categories, simple clitics and special clitics. This distinction is, however, disputed. 
Simple clitics. Simple clitics are free morphemes: they can stand alone in a phrase or sentence. They are unaccented and thus phonologically dependent upon a nearby word. They derive meaning only from that "host". Special clitics. Special clitics are morphemes that are bound to the word upon which they depend: they exist as a part of their host. That form, which is unaccented, represents a variant of a free form that carries stress. Both variants carry similar meaning and phonological makeup, but the special clitic is bound to a host word and is unaccented. Properties. Some clitics can be understood as elements undergoing a historical process of grammaticalization: lexical item → clitic → affix. According to this model from Judith Klavans, an autonomous lexical item in a particular context loses the properties of a fully independent word over time and acquires the properties of a morphological affix (prefix, suffix, infix, etc.). At any intermediate stage of this evolutionary process, the element in question can be described as a "clitic". As a result, this term ends up being applied to a highly heterogeneous class of elements, presenting different combinations of word-like and affix-like properties. Comparison with affixes. Although the term "clitic" can be used descriptively to refer to any element whose grammatical status is somewhere in between a typical word and a typical affix, linguists have proposed various definitions of "clitic" as a technical term. One common approach is to treat clitics as words that are prosodically deficient: like affixes, they cannot appear without a host, and they can only form an accentual unit in combination with their host. The term "postlexical clitic" is sometimes used for this sense of the term. Given this basic definition, further criteria are needed to establish a dividing line between clitics and affixes. There is no natural, clear-cut boundary between the two categories (since from a diachronic point of view, a given form can move gradually from one to the other by morphologization). However, by identifying clusters of observable properties that are associated with core examples of clitics on the one hand, and core examples of affixes on the other, one can pick out a battery of tests that provide an empirical foundation for a clitic-affix distinction. An affix syntactically and phonologically attaches to a base morpheme of a limited part of speech, such as a verb, to form a new word. A clitic syntactically functions above the word level, on the phrase or clause level, and attaches only phonetically to the first, last, or only word in the phrase or clause, whichever part of speech the word belongs to. The results of applying these criteria sometimes reveal that elements that have traditionally been called "clitics" actually have the status of affixes (e.g., the Romance pronominal clitics discussed below). Zwicky and Pullum postulated five characteristics that distinguish clitics from affixes. An example of differing analyses by different linguists is the discussion of the possessive marker ('s) in English: some linguists treat it as an affix, while others treat it as a clitic. Comparison with words. Similar to the discussion above, clitics must be distinguishable from words. Linguists have proposed a number of tests to differentiate between the two categories. Some tests, specifically, are based upon the understanding that when comparing the two, clitics resemble affixes, while words resemble syntactic phrases.
Clitics and words resemble different categories, in the sense that they share certain properties. Six such tests are described below. These are not the only ways to differentiate between words and clitics. Word order. Clitics do not always appear next to the word or phrase that they are associated with grammatically. They may be subject to global word order constraints that act on the entire sentence. Many Indo-European languages, for example, obey Wackernagel's law (named after Jacob Wackernagel), which requires sentential clitics to appear in "second position", after the first syntactic phrase or the first stressed word in a clause: Indo-European languages. Germanic languages. English. English enclitics include the contracted versions of auxiliary verbs, as in "I'm" and "we've". Some also regard the possessive marker, as in "The King of England's crown" as an enclitic, rather than a (phrasal) genitival inflection. Some consider the infinitive marker "to" and the English articles "a, an, the" to be proclitics. The negative marker "-n't" as in "couldn't" etc. is typically considered a clitic that developed from the lexical item "not". Linguists Arnold Zwicky and Geoffrey Pullum argue, however, that the form has the properties of an affix rather than a syntactically independent clitic. Celtic languages. In Cornish, the clitics "ma" "/" "na" are used after a noun and definite article to express "this" / "that" (singular) and "these" / "those" (plural). For example: Irish Gaelic uses "seo" "/" "sin" as clitics in a similar way, also to express "this" / "that" and "these" / "those". For example: Romance languages. In Romance languages, some have treated the object personal pronoun forms as clitics, though they only attach to the verb they are the object of and so are affixes by the definition used here. There is no general agreement on the issue. For the Spanish object pronouns, for example: Portuguese allows object suffixes before the conditional and future suffixes of the verbs: Colloquial Portuguese allows ser to be conjugated as a verbal clitic adverbial adjunct to emphasize the importance of the phrase compared to its context, or with the meaning of "really" or "in truth": Note that this clitic form is only for the verb ser and is restricted to only third-person singular conjugations. It is not used as a verb in the grammar of the sentence but introduces prepositional phrases and adds emphasis. It does not need to concord with the tense of the main verb, as in the second example, and can be usually removed from the sentence without affecting the simple meaning. Proto-Indo-European. In the Indo-European languages, some clitics can be traced back to Proto-Indo-European: for example, is the original form of Sanskrit "च" ("-ca"), Greek "τε" ("-te"), and Latin "-que". Slavic languages. Serbo-Croatian. 
Serbo-Croatian: the reflexive pronoun forms "si" and "se", "li" (yes–no question), unstressed present and aorist tense forms of "biti" ("to be"; "sam, si, je, smo, ste, su"; and "bih, bi, bi, bismo, biste, bi", for the respective tense), unstressed personal pronouns in genitive ("me, te, ga, je, nas, vas, ih"), dative ("mi, ti, mu, joj, nam, vam, im") and accusative ("me, te, ga (nj), je (ju), nas, vas, ih"), and unstressed present tense of "htjeti" ("want/will"; "ću, ćeš, će, ćemo, ćete, će") These clitics follow the first stressed word in the sentence or clause in most cases, which may have been inherited from Proto-Indo-European (see Wackernagel's Law), even though many of the modern clitics became cliticised much more recently in the language (e.g. auxiliary verbs or the accusative forms of pronouns). In subordinate clauses and questions, they follow the connector and/or the question word respectively. Examples (clitics – "sam" "I am", "biste" "you would (pl.)", "mi" "to me", "vam" "to you (pl.)", "ih" "them"): In certain rural dialects this rule is (or was until recently) very strict, whereas elsewhere various exceptions occur. These include phrases containing conjunctions (e. g. "Ivan i Ana" "Ivan and Ana"), nouns with a genitival attribute (e. g. "vrh brda" "the top of the hill"), proper names and titles and the like (e. g. "(gospođa) Ivana Marić" "(Mrs) Ivana Marić", "grad Zagreb" "the city (of) Zagreb"), and in many local varieties clitics are hardly ever inserted into any phrases (e. g. "moj najbolji prijatelj" "my best friend", "sutra ujutro" "tomorrow morning"). In cases like these, clitics normally follow the initial phrase, although some Standard grammar handbooks recommend that they should be placed immediately after the verb (many native speakers find this unnatural). Examples: Clitics are however never inserted after the negative particle "ne", which always precedes the verb in Serbo-Croatian, or after prefixes (earlier preverbs), and the interrogative particle "li" always immediately follows the verb. Colloquial interrogative particles such as "da li", "dal", "jel" appear in sentence-initial position and are followed by clitics (if there are any). Examples:
6759
17350134
https://en.wikipedia.org/wiki?curid=6759
Context-free grammar
In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules can be applied to a nonterminal symbol regardless of its context. In particular, in a context-free grammar, each production rule is of the form formula_1 with formula_2 a "single" nonterminal symbol, and formula_3 a string of terminals and/or nonterminals (formula_3 can be empty). Regardless of which symbols surround it, the single nonterminal formula_2 on the left hand side can always be replaced by formula_3 on the right hand side. This distinguishes it from a context-sensitive grammar, which can have production rules in the form formula_7 with formula_2 a nonterminal symbol and formula_3, formula_10, and formula_11 strings of terminal and/or nonterminal symbols. A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the first rule in the picture, formula_12 replaces formula_13 with formula_14. There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol"). Nonterminal symbols are used during the derivation process, but do not appear in its final result string. Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable. Context-free grammars arise in linguistics where they are used to describe the structure of sentences and words in a natural language, and they were invented by the linguist Noam Chomsky for this purpose. By contrast, in computer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the document type definition. In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF. Background. Since at least the time of the ancient Indian scholar Pāṇini, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, the sentence: John, whose blue car was in the garage, walked to the grocery store. can be logically parenthesized (with the logical metasymbols [ ]) as follows: [John[, [whose [blue car]] [was [in [the garage]]],]] [walked [to [the [grocery store]]]]. A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. 
Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly. Context-free grammars are a special form of semi-Thue systems that in their general form date back to the work of Axel Thue. The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky, and also their classification as a special type of formal grammar (which he called phrase-structure grammars). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules. Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee. The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars. Formal definitions. A context-free grammar is defined by the 4-tuple formula_15, where Production rule notation. A production rule in is formalized mathematically as a pair formula_18, where formula_19 is a nonterminal and formula_20 is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with formula_3 as its left hand side and as its right hand side: formula_22. It is allowed for to be the empty string, and in this case it is customary to denote it by . The form formula_23 is called an ε-production. It is common to list all right-hand sides for the same left-hand side on the same line, using | (the vertical bar) to separate them. Rules formula_24 and formula_25 can hence be written as formula_26. In this case, formula_27 and formula_28 are called the first and second alternative, respectively. Rule application. For any strings formula_29, we say directly yields , written as formula_30, if formula_31 with formula_19 and formula_33 such that formula_34 and formula_35. Thus, is a result of applying the rule formula_36 to . 
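To make the 4-tuple definition and the one-step "directly yields" relation concrete, here is a minimal Python sketch. The dictionary-based grammar representation and the function name step are illustrative assumptions, not part of the formalism or of any standard library; the running example is a standard grammar for well-formed parentheses (one of the canonical examples discussed below).

```python
# A sketch of a context-free grammar as a 4-tuple (nonterminals, terminals,
# rules, start symbol), using a balanced-parentheses grammar
# S -> SS | (S) | () as the running example.
from typing import Dict, List, Set, Tuple

Grammar = Tuple[Set[str], Set[str], Dict[str, List[List[str]]], str]

G: Grammar = (
    {"S"},                                              # nonterminals V
    {"(", ")"},                                         # terminals, disjoint from V
    {"S": [["S", "S"], ["(", "S", ")"], ["(", ")"]]},   # production rules
    "S",                                                # start symbol
)

def step(form: List[str], grammar: Grammar) -> List[List[str]]:
    """All strings directly yielded by 'form': every result of applying one
    production rule to one occurrence of a nonterminal."""
    _, _, rules, _ = grammar
    out = []
    for i, symbol in enumerate(form):
        for rhs in rules.get(symbol, []):   # terminals have no rules
            out.append(form[:i] + rhs + form[i + 1:])
    return out

if __name__ == "__main__":
    print(step(["S"], G))            # [['S', 'S'], ['(', 'S', ')'], ['(', ')']]
    print(step(["(", "S", ")"], G))  # three forms, one per rule applied to the inner S
```

Iterating step and keeping only the forms that consist entirely of terminal symbols enumerates exactly the repeated rule applications and the generated language described next.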
Repetitive rule application. For any strings formula_37 we say "yields" or is "derived" from if there is a positive integer and strings formula_38 such that formula_39. This relation is denoted formula_40, or formula_41 in some textbooks. If formula_42, the relation formula_43 holds. In other words, formula_44 and formula_45 are the reflexive transitive closure (allowing a string to yield itself) and the transitive closure (requiring at least one step) of formula_46, respectively. Context-free language. The language of a grammar formula_15 is the set formula_48 of all terminal-symbol strings derivable from the start symbol. A language is said to be a context-free language (CFL), if there exists a CFG , such that formula_49. Non-deterministic pushdown automata recognize exactly the context-free languages. Examples. Words concatenated with their reverse. The grammar formula_50, with productions , is context-free. It is not proper since it includes an -production. A typical derivation in this grammar is This makes it clear that formula_51. The language is context-free; however, it can be proved that it is not regular. If the productions , are added, a context-free grammar for the set of all palindromes over the alphabet is obtained. Well-formed parentheses. The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols and and one nonterminal symbol . The production rules are , The first rule allows the symbol to multiply; the second rule allows the symbol to become enclosed by matching parentheses; and the third rule terminates the recursion. Well-formed nested parentheses and square brackets. A second canonical example is two different kinds of matching nested parentheses, described by the productions: with terminal symbols , , , and nonterminal . The following sequence can be derived in that grammar: Matching pairs. In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example: This grammar generates the language , which is not regular (according to the pumping lemma for regular languages). The special character stands for the empty string. By changing the above grammar to we obtain a grammar generating the language instead. This differs only in that it contains the empty string while the original grammar did not. Distinct number of as and bs. A context-free grammar for the language consisting of all strings over containing an unequal number of s and s: Here, the nonterminal can generate all strings with more as than s, the nonterminal generates all strings with more s than s and the nonterminal generates all strings with an equal number of s and s. Omitting the third alternative in the rules for and does not restrict the grammar's language. Second block of bs of double size. Another example of a non-regular language is formula_52. It is context-free as it can be generated by the following context-free grammar: First-order logic formulas. The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. Examples of languages that are not context free. 
In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced "disregarding the other", where the two types need not nest inside one another, for example: or The fact that this language is not context free can be proven using pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form formula_53 should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form . Regular grammars. Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular. The terminals here are and , while the only nonterminal is . The language described is all nonempty strings of s and s that end in . This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language. Using vertical bars, the grammar above can be described more tersely as follows: Derivations and syntax trees. A "derivation" of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language. A derivation is fully determined by giving, for each step: For clarity, the intermediate string is usually given as well. For instance, with the grammar: the string can be derived from the start symbol with the following derivation: (by rule 1. on ) (by rule 1. on the second ) (by rule 2. on the first ) (by rule 3. on the second ) (by rule 2. on ) Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), which can be summarized as rule 1 rule 2 rule 1 rule 2 rule 3. One rightmost derivation is: (by rule 1 on the rightmost ) (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which can be summarized as rule 1 rule 1 rule 3 rule 2 rule 2. The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See for an example LL parsers and LR parsers. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "" is derived according to the leftmost derivation outlined above, the structure of the string would be: where denotes a substring recognized as belonging to . 
This hierarchy can also be seen as a tree: This tree is called a "parse tree" or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 1 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which defines a string with a different structure and a different parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: (by rule 1 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an "ambiguous grammar". Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called "inherently ambiguous languages". Normal forms. Every context-free grammar with no -production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm). Closure properties. Context-free languages are closed under the various operations, that is, if the languages and are context-free, so is the result of the following operations: They are not closed under general intersection (hence neither under complementation) and set difference. Decidable problems. The following are some decidable problems about context-free grammars. Parsing. The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of "O"("n"2.3728639). Conversely, Lillian Lee has shown "O"("n"3−"ε") Boolean matrix multiplication to be reducible to "O"("n"3−3"ε") CFG parsing, thus establishing some kind of lower bound for the latter. Reachability, productiveness, nullability. A nonterminal symbol formula_54 is called "productive", or "generating", if there is a derivation formula_55 for some string formula_56 of terminal symbols. formula_54 is called "reachable" if there is a derivation formula_58 for some strings formula_59 of nonterminal and terminal symbols from the start symbol. formula_54 is called "useless" if it is unreachable or unproductive. formula_54 is called "nullable" if there is a derivation formula_62. A rule formula_63 is called an "ε-production". A derivation formula_64 is called a "cycle". 
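The productiveness, reachability, and nullability properties just defined can each be computed by a simple fixed-point iteration over the rule set. The sketch below reuses the dictionary-based grammar representation assumed in the earlier sketch; the function names are likewise illustrative, not a standard API.

```python
# Fixed-point computations for productive, reachable, and nullable nonterminals.
# A grammar is assumed to be a dict mapping each nonterminal to a list of
# right-hand sides, where each right-hand side is a list of symbols
# (an empty list represents an epsilon-production).

def productive(rules, terminals):
    """Nonterminals that derive some string of terminal symbols."""
    prod, changed = set(), True
    while changed:
        changed = False
        for lhs, rhss in rules.items():
            if lhs in prod:
                continue
            for rhs in rhss:
                if all(s in terminals or s in prod for s in rhs):
                    prod.add(lhs)
                    changed = True
                    break
    return prod

def reachable(rules, start):
    """Nonterminals reachable from the start symbol."""
    reach, stack = {start}, [start]
    while stack:
        for rhs in rules.get(stack.pop(), []):
            for s in rhs:
                if s in rules and s not in reach:   # s is a nonterminal
                    reach.add(s)
                    stack.append(s)
    return reach

def nullable(rules):
    """Nonterminals that derive the empty string."""
    null, changed = set(), True
    while changed:
        changed = False
        for lhs, rhss in rules.items():
            if lhs not in null and any(all(s in null for s in rhs) for rhs in rhss):
                null.add(lhs)   # covers the epsilon-production case (empty rhs)
                changed = True
    return null

if __name__ == "__main__":
    # S -> AB | a ;  A -> epsilon ;  B -> Bb (unproductive)
    rules = {"S": [["A", "B"], ["a"]], "A": [[]], "B": [["B", "b"]]}
    print(productive(rules, {"a", "b"}))  # {'S', 'A'} (set order may vary)
    print(reachable(rules, "S"))          # {'S', 'A', 'B'}
    print(nullable(rules))                # {'A'}
```

Each loop either adds at least one nonterminal per pass or stops, so it terminates after at most |V| passes; the resulting sets are exactly what the elimination algorithms mentioned below operate on.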
Algorithms are known to eliminate from a given grammar, without changing its generated language, useless (that is, unreachable or unproductive) symbols, ε-productions, and cycles. In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are called "useless". In the depicted example grammar, the nonterminal is unreachable, and is unproductive, while causes a cycle. Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "" from the right-hand side of the rule for . A context-free grammar is said to be "proper" if it has neither useless symbols nor ε-productions nor cycles. Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one. Regularity and LL("k") checks. It is decidable whether a given "grammar" is a regular grammar, as well as whether it is an LL("k") grammar for a given "k". If "k" is not given, the latter problem is undecidable. Given a context-free grammar, it is not decidable whether its language is regular, nor whether it is an LL("k") language for a given "k". Emptiness and finiteness. There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite. Undecidable problems. Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. the emptiness problem (whether the grammar generates any terminal strings at all) is undecidable for context-sensitive grammars, but decidable for context-free grammars. However, many problems are undecidable even for context-free grammars; the most prominent ones are handled in the following. Universality. Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules? A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a "computation history", a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will accept all strings only if the machine does not accept that input. Language equality. Given two CFGs, do they generate the same language? The undecidability of this problem is a direct consequence of the previous one: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings. Language inclusion. Given two CFGs, can the first one generate all strings that the second one can generate? If this problem were decidable, then language equality could be decided too: two CFGs formula_65 and formula_66 generate the same language if formula_67 is a subset of formula_68 and formula_68 is a subset of formula_67. Being in a lower or higher level of the Chomsky hierarchy. Using Greibach's theorem, it can be shown that the two following problems are undecidable: Grammar ambiguity. Given a CFG, is it ambiguous? The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. This may be proved by Ogden's lemma. Language disjointness. Given two CFGs, is there any string derivable from both grammars?
If this problem was decidable, the undecidable Post correspondence problem (PCP) could be decided, too: given strings formula_71 over some alphabet formula_72, let the grammar consist of the rule formula_73; where formula_74 denotes the reversed string formula_75 and formula_76 does not occur among the formula_77; and let grammar consist of the rule formula_78; Then the PCP instance given by formula_71 has a solution if and only if and share a derivable string. The left of the string (before the formula_80) will represent the top of the solution for the PCP instance while the right side will be the bottom in reverse. Extensions. An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics. An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages. Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars. Subclasses. There are a number of important subclasses of the context-free grammars: LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day. Linguistic applications. Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules. Such rules are another standard device in traditional linguistics; e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things can be expressed that natural language actually allows. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion). Chomsky's general position regarding the non-context-freeness of natural language has held up since then, although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved. 
Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German and reduplication in Bambara), the vast majority of forms in natural language are indeed context-free.
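Returning to the parsing discussion above, the following is a minimal sketch of the CYK membership test for a grammar already in Chomsky normal form. The grammar used here (non-empty balanced parentheses, which connects to the parenthesis languages discussed earlier) and the test strings are illustrative assumptions, not examples taken from the article.

from itertools import product

# CYK membership test for a context-free grammar in Chomsky normal form.
# Hypothetical CNF grammar for non-empty balanced strings of parentheses:
#   S -> S S | L B | L R     B -> S R     L -> '('     R -> ')'
UNIT_RULES   = {("(",): {"L"}, (")",): {"R"}}          # A -> terminal
BINARY_RULES = {("L", "B"): {"S"}, ("L", "R"): {"S"},  # A -> B C
                ("S", "S"): {"S"}, ("S", "R"): {"B"}}
START = "S"

def cyk(word):
    n = len(word)
    if n == 0:
        return False  # this CNF grammar has no ε-production
    # table[i][l] = set of nonterminals deriving word[i : i + l + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] |= UNIT_RULES.get((ch,), set())
    for length in range(2, n + 1):            # span length
        for i in range(n - length + 1):       # span start
            for split in range(1, length):    # split point
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for pair in product(left, right):
                    table[i][length - 1] |= BINARY_RULES.get(pair, set())
    return START in table[0][n - 1]

for w in ["()", "(())()", "(()", ""]:
    print(repr(w), cyk(w))   # True, True, False, False

The three nested loops over span length, start position and split point give the cubic running time that Valiant's reduction to Boolean matrix multiplication, mentioned above, improves upon.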
6760
32586582
https://en.wikipedia.org/wiki?curid=6760
Cryonics
Cryonics (from "kryos", meaning "cold") is the low-temperature freezing (usually at ) and storage of human remains in the hope that resurrection may be possible in the future. Cryonics is regarded with skepticism by the mainstream scientific community. It is generally viewed as a pseudoscience, and its practice has been characterized as quackery. Cryonics procedures can begin only after the "patients" are clinically and legally dead. Procedures may begin within minutes of death, and use cryoprotectants to try to prevent ice formation during cryopreservation. It is not possible to reanimate a corpse that has undergone vitrification, as this damages the brain, including its neural circuits. The first corpse to be frozen was that of James Bedford, in 1967. As of 2014, remains from about 250 bodies had been cryopreserved in the United States, and 1,500 people had made arrangements for cryopreservation of theirs. Even if the resurrection promised by cryonics were possible, economic considerations make it unlikely cryonics corporations could remain in business long enough to deliver. The "patients", being dead, cannot continue to pay for their own preservation. Early attempts at cryonic preservation were made in the 1960s and early 1970s; most relied on family members to pay for the preservation and ended in failure, with all but one of the corpses cryopreserved before 1973 being thawed and disposed of. Conceptual basis. Cryonicists argue that as long as brain structure remains intact, there is no fundamental barrier, given our current understanding of physics, to recovering its information content. Cryonics proponents go further than the mainstream consensus in saying that the brain does not have to be continuously active to survive or retain memory. Cryonicists controversially say that a human can survive even within an inactive, badly damaged brain, as long as the original encoding of memory and personality can be adequately inferred and reconstituted from what remains. Cryonics uses temperatures below −130 °C, called cryopreservation, in an attempt to preserve enough brain information to permit the revival of the cryopreserved person. Cryopreservation is accomplished by freezing with or without cryoprotectant to reduce ice damage, or by vitrification to avoid ice damage. Even using the best methods, cryopreservation of whole bodies or brains is very damaging and irreversible with current technology. Cryonicists call the human remains packed into low-temperature vats "patients". They hope that some kind of presently nonexistent nanotechnology will be able to bring the dead back to life and treat the diseases that killed them. Mind uploading has also been proposed. Cryonics in practice. Cryonics is expensive. , the cost of preparing and storing corpses using cryonics ranged from US$28,000 to $200,000. At high concentrations, cryoprotectants can stop ice formation completely. Cooling and solidification without crystal formation is called vitrification. In the late 1990s, cryobiologists Gregory Fahy and Brian Wowk developed the first cryoprotectant solutions that could vitrify at very slow cooling rates while still allowing whole organ survival, for the purpose of banking transplantable organs. This has allowed animal brains to be vitrified, thawed, and examined for ice damage using light and electron microscopy. No ice crystal damage was found; cellular damage was due to dehydration and toxicity of the cryoprotectant solutions. 
Costs can include payment for medical personnel to be on call for death, vitrification, transportation in dry ice to a preservation facility, and payment into a trust fund intended to cover indefinite storage in liquid nitrogen and future revival costs. As of 2011, U.S. cryopreservation costs can range from $28,000 to $200,000, and are often financed via life insurance. KrioRus, which stores bodies communally in large dewars, charges $12,000 to $36,000 for the procedure. Some customers opt to have only their brain cryopreserved ("neuropreservation"), rather than their whole body. As of 2014, about 250 corpses have been cryogenically preserved in the U.S., and around 1,500 people have signed up to have their remains preserved. As of 2016, there are four facilities that retain cryopreserved bodies, three in the U.S. and one in Russia. A more recent development is Tomorrow Biostasis GmbH, a Berlin-based firm offering cryonics and standby and transportation services in Europe. Founded in 2019 by Emil Kendziorra and Fernando Azevedo Pinheiro, it partners with the European Biostasis Foundation in Switzerland for long-term corpse storage. The facility was completed in 2022. It seems extremely unlikely that any cryonics company could exist long enough to take advantage of the supposed benefits offered; historically, even the most robust corporations have only a one-in-a-thousand chance of lasting 100 years. Many cryonics companies have failed; , all but one of the pre-1973 batch had gone out of business, and their stored corpses have been defrosted and disposed of. Obstacles to success. Preservation damage. Medical laboratories have long used cryopreservation to maintain animal cells, human embryos, and even some organized tissues, for periods as long as three decades, but recovering large animals and organs from a frozen state is not considered possible now. Large vitrified organs tend to develop fractures during cooling, a problem worsened by the large tissue masses and very low temperatures of cryonics. Without cryoprotectants, cell shrinkage and high salt concentrations during freezing usually prevent frozen cells from functioning again after thawing. Ice crystals can also disrupt connections between cells that are necessary for organs to function. Some cryonics organizations use vitrification without a chemical fixation step, sacrificing some structural preservation quality for less damage at the molecular level. Some scientists, like João Pedro Magalhães, have questioned whether using a deadly chemical for fixation eliminates the possibility of biological revival, making chemical fixation unsuitable for cryonics. Outside of cryonics firms and cryonics-linked interest groups, many scientists are very skeptical about cryonics methods. Cryobiologist Dayong Gao has said, "we simply don't know if [subjects have] been damaged to the point where they've 'died' during vitrification because the subjects are now inside liquid nitrogen canisters." Based on experience with organ transplants, biochemist Ken Storey argues that "even if you only wanted to preserve the brain, it has dozens of different areas which would need to be cryopreserved using different protocols". Revival. Revival would require repairing damage from lack of oxygen, cryoprotectant toxicity, thermal stress (fracturing), and freezing in tissues that do not successfully vitrify, followed by reversing the cause of death. In many cases, extensive tissue regeneration would be necessary. This revival technology remains speculative. 
Legal issues. Historically, people had little control over how their bodies were treated after death, as religion held jurisdiction over the matter. But secular courts began to exercise jurisdiction over corpses and use discretion in carrying out deceased people's wishes. Most countries legally treat preserved bodies as deceased persons because of laws that forbid vitrifying someone who is medically alive. In France, cryonics is not considered a legal mode of body disposal; only burial, cremation, and formal body donation to science are allowed, though bodies may legally be shipped to other countries for cryonic freezing. As of 2015, British Columbia prohibits the sale of arrangements for cryonic body preservation. In Russia, cryonics falls outside both the medical industry and the funeral services industry, making it easier than in the U.S. to get hospitals and morgues to release cryonics candidates. In 2016, the English High Court ruled in favor of a mother's right to seek cryopreservation of her terminally ill 14-year-old daughter, as the girl wanted, contrary to the father's wishes. The decision was made on the basis that the case represented a conventional dispute over the disposal of the girl's body, although the judge urged ministers to seek "proper regulation" for the future of cryonic preservation after the hospital raised concerns about the competence and professionalism of the team that conducted the preservation procedures. In "Alcor Life Extension Foundation v. Richardson", the Iowa Court of Appeals ordered the disinterment of Richardson, who was buried against his wishes, for cryopreservation. A detailed legal examination by Jochen Taupitz concludes that cryonic storage is legal in Germany for an indefinite period. Ethics. Writing in "Bioethics" in 2009, David Shaw examined cryonics. The arguments he cited against it included changing the concept of death, the expense of preservation and revival, lack of scientific advancement to permit revival, temptation to use premature euthanasia, and failure due to catastrophe. Arguments in favor of cryonics include the potential benefit to society, the prospect of immortality, and the benefits associated with avoiding death. Shaw explores the expense and the potential payoff, and applies an adapted version of Pascal's Wager to the question. He argues that someone who bets on cryonic preservation risks losing "a bit of money" but potentially gains a longer life and perhaps immortality. Shaun Pattinson responds that Shaw's calculation is incomplete because "being revived only equates to winning the wager if the revived life is worth living. A longer life of unremitting suffering, perhaps due to irreparable nerve damage or even the actions of an evil reviver, is unlikely to be considered preferable to non-revival". In 2016, Charles Tandy wrote in support of cryonics, arguing that honoring someone's last wishes is seen as a benevolent duty in American and many other cultures. History. Cryopreservation was applied to human cells beginning in 1954 with frozen sperm, which was thawed and used to inseminate three women. The freezing of humans was first scientifically proposed by Michigan professor Robert Ettinger in "The Prospect of Immortality" (1962). In 1966, the first human body was frozen—though it had been embalmed for two months—by being placed in liquid nitrogen and stored at just above freezing. The middle-aged woman from Los Angeles, whose name is unknown, was soon thawed and buried by relatives. 
The first body to be cryopreserved and then frozen in hope of future revival was that of James Bedford. Alcor's Mike Darwin says Bedford's body was cryopreserved around two hours after his death by cardiorespiratory arrest (secondary to metastasized kidney cancer) on January 12, 1967. Bedford's corpse is the only one frozen before 1974 still preserved today. In 1976, Ettinger founded the Cryonics Institute; his corpse was cryopreserved in 2011. In 1981, Robert Nelson, "a former TV repairman with no scientific background" who led the Cryonics Society of California, was sued for allowing nine bodies to thaw and decompose in the 1970s; in his defense, he claimed that the Cryonics Society had run out of money. This lowered the reputation of cryonics in the U.S. In 2018, a Y-Combinator startup called Nectome was recognized for developing a method of preserving brains with chemicals rather than by freezing. The method is fatal, performed as euthanasia under general anesthesia, but the hope is that future technology will allow the brain to be physically scanned into a computer simulation, neuron by neuron. Demographics. According to "The New York Times", cryonicists are predominantly non-religious white men, outnumbering women by about three to one. According to "The Guardian", as of 2008, while most cryonicists used to be young, male, and "geeky", recent demographics have shifted slightly toward whole families. In 2015, Du Hong, a 61-year-old female writer of children's literature, became the first known Chinese national to have her head cryopreserved. Reception. Cryonics is generally regarded as a fringe pseudoscience. Between 1982 and November 2018, the Society for Cryobiology rejected members who practiced cryonics, and issued a public statement saying that cryonics "is an act of speculation or hope, not science", and as such outside the scope of the Society. Russian company KrioRus is the first non-U.S. vendor of cryonics services. Yevgeny Alexandrov, chair of the Russian Academy of Sciences commission against pseudoscience, said there was "no scientific basis" for cryonics, and that the company was based on "unfounded speculation". Scientists have expressed skepticism about cryonics in media sources, and the Norwegian philosopher Ole Martin Moen has written that the topic receives a "minuscule" amount of attention in academia. While some neuroscientists contend that all the subtleties of a human mind are contained in its anatomical structure, few will comment directly on cryonics due to its speculative nature. People who intend to be frozen are often "looked at as a bunch of kooks". Cryobiologist Kenneth B. Storey said in 2004 that cryonics is impossible and will never be possible, as cryonics proponents are proposing to "overturn the laws of physics, chemistry, and molecular science". Neurobiologist Michael Hendricks has said, "Reanimation or simulation is an abjectly false hope that is beyond the promise of technology and is certainly impossible with the frozen, dead tissue offered by the 'cryonics' industry". Anthropologist Simon Dein writes that cryonics is a typical pseudoscience because of its lack of falsifiability and testability. In his view, cryonics is not science, but religion: it places faith in nonexistent technology and promises to overcome death. William T. Jarvis has written, "Cryonics might be a suitable subject for scientific research, but marketing an unproven method to the public is quackery". 
According to cryonicist Aschwin de Wolf and others, cryonics can often produce intense hostility from spouses who are not cryonicists. James Hughes, the executive director of the pro-life-extension Institute for Ethics and Emerging Technologies, has not personally signed up for cryonics, calling it a worthy experiment but saying, "I value my relationship with my wife." Cryobiologist Dayong Gao has said, "People can always have hope that things will change in the future, but there is no scientific foundation supporting cryonics at this time." While it is universally agreed that personal identity is uninterrupted when brain activity temporarily ceases during incidents of accidental drowning (where people have been restored to normal functioning after being completely submerged in cold water for up to 66 minutes), one argument against cryonics is that a centuries-long absence from life might interrupt personal identity, such that the revived person would "not be themself". Maastricht University bioethicist David Shaw raises the argument that there would be no point in being revived in the far future if one's friends and families are dead, leaving them all alone, but he notes that family and friends can also be frozen, that there is "nothing to prevent the thawed-out freezee from making new friends", and that a lonely existence may be preferable to none at all. In fiction. Suspended animation is a popular subject in science fiction and fantasy settings. It is often the means by which a character is transported into the future. The characters Philip J. Fry in "Futurama" and Khan Noonien Singh in "Star Trek" exemplify this trope. A survey in Germany found that about half of the respondents were familiar with cryonics, and about half of those familiar with it had learned of it from films or television. In popular culture. The town of Nederland, Colorado, hosts an annual Frozen Dead Guy Days festival to commemorate a substandard attempt at cryopreservation. Notable people. Corpses subjected to the cryonics process include those of baseball players Ted Williams and his son John Henry Williams (in 2002 and 2004, respectively), engineer and doctor L. Stephen Coles (in 2014), economist and entrepreneur Phil Salin, and software engineer Hal Finney (in 2014). People known to have arranged for cryonics upon death include PayPal founders Luke Nosek and Peter Thiel, Oxford transhumanists Nick Bostrom and Anders Sandberg, and transhumanist philosopher David Pearce. Larry King once arranged for cryonics but, according to "Inside Edition", changed his mind. Sex offender and financier Jeffrey Epstein wanted to have his head and penis frozen after death. The corpses of some are mistakenly believed to have undergone cryonics. The urban legend that Walt Disney's remains were cryopreserved is false; they were cremated and interred at Forest Lawn Memorial Park Cemetery. Timothy Leary was a long-time cryonics advocate and signed up with a major cryonics provider, but changed his mind shortly before his death and was not cryopreserved.
6761
194203
https://en.wikipedia.org/wiki?curid=6761
Unitary patent
The European patent with unitary effect, also known as the unitary patent, is a European patent which benefits from unitary effect in the participating member states of the European Union. Unitary effect means the patent has a common legal status throughout all the participating states, eliminating scenarios in which a patent may be invalidated by courts in one participating member state yet upheld by courts in another. Unitary effect may be requested by the proprietor within one month of grant of a European patent, replacing validation of the European patent in the individual countries concerned. Infringement and revocation proceedings are conducted before the Unified Patent Court (UPC), whose decisions have a uniform effect for the unitary patent in the participating member states as a whole rather than in each country individually. The unitary patent may only be limited, transferred or revoked, or lapse, in respect of all the participating Member States. Licensing is however possible for part of the unitary territory. The unitary patent may coexist with nationally enforceable patents ("classical" patents) in the non-participating states. The unitary patent's stated aims are to make access to the patent system "easier, less costly and legally secure within the European Union" and "the creation of uniform patent protection throughout the Union". European patents are granted in English, French, or German, and the unitary effect will not require further translations after a transition period. The maintenance fees of the unitary patents are lower than the sum of the renewal fees for national patents of the corresponding area, being equivalent to the combined maintenance fees of Germany, France, the UK and the Netherlands (although the UK is no longer participating following Brexit). The negotiations which resulted in the unitary patent can be traced back to various initiatives dating to the 1970s. At different times, the project, or very similar projects, have been referred to as the "European Union patent" (the name used in the EU treaties, which serve as the legal basis for EU competency), "EU patent", "Community patent", "European Community Patent", "EC patent" and "COMPAT". On 17 December 2012, agreement was reached between the European Council and European Parliament on the two EU regulations that made the unitary patent possible through enhanced cooperation at EU level. The legality of the two regulations was challenged by Spain and Italy, but all their claims were rejected by the European Court of Justice. Italy subsequently joined the unitary patent regulation in September 2015, so that all EU member states except Spain and Croatia now participate in the enhanced cooperation for a unitary patent. Unitary effect of newly granted European patents will be available from the date when the related Unified Patent Court Agreement enters into force for those EU countries that have also ratified the UPC, and will extend to those participating member states for which the UPC Agreement enters into force at the time of registration of the unitary patent. Previously granted unitary patents will not automatically get their unitary effect extended to the territory of participating states which ratify the UPC agreement at a later date. The unitary patent system has applied since 1 June 2023, the date of entry into force of the UPC Agreement. Background. Legislative history.
In 2009, three draft documents were published regarding a community patent: a European patent in which the European Community was designated: Based on those documents, the European Council requested on 6 July 2009 an opinion from the Court of Justice of the European Union, regarding the compatibility of the envisioned Agreement with EU law: "'Is the envisaged agreement creating a Unified Patent Litigation System (currently named European and Community Patents Court) compatible with the provisions of the Treaty establishing the European Community?’" In December 2010, the use of the enhanced co-operation procedure, under which of the Treaty on the Functioning of the European Union provides that a group of member states of the European Union can choose to co-operate on a specific topic, was proposed by twelve Member States to set up a unitary patent applicable in all participating European Union Member States. The use of this procedure had only been used once in the past, for harmonising rules regarding the applicable law in divorce across several EU Member States. In early 2011, the procedure leading to the enhanced co-operation was reported to be progressing. Twenty-five Member States had written to the European Commission requesting to participate, with Spain and Italy remaining outside, primarily on the basis of ongoing concerns over translation issues. On 15 February, the European Parliament approved the use of the enhanced co-operation procedure for unitary patent protection by a vote of 471 to 160, and on 10 March 2011 the Council gave their authorisation. Two days earlier, on 8 March 2011, the Court of Justice of the European Union had issued its opinion, stating that the draft Agreement creating the European and Community Patent Court would be incompatible with EU law. The same day, the Hungarian Presidency of the Council insisted that this opinion would not affect the enhanced co-operation procedure. In November 2011, negotiations on the enhanced co-operation system were reportedly advancing rapidly—too fast, in some views. It was announced that implementation required an enabling European Regulation, and a Court agreement between the states that elect to take part. The European Parliament approved the continuation of negotiations in September. A draft of the agreement was issued on 11 November 2011 and was open to all member states of the European Union, but not to other European Patent Convention states. However, serious criticisms of the proposal remained mostly unresolved. A meeting of the Competitiveness Council on 5 December failed to agree on the final text. In particular, there was no agreement on where the Central Division of a Unified Patent Court should be located, "with London, Munich and Paris the candidate cities." The Polish Presidency acknowledged on 16 December 2011 the failure to reach an agreement "on the question of the location of the seat of the central division." The Danish Presidency therefore inherited the issue. According to the President of the European Commission in January 2012, the only question remaining to be settled was the location of the Central Division of the Court. However, evidence presented to the UK House of Commons European Scrutiny Committee in February suggested that the position was more complicated. At an EU summit at the end of January 2012, participants agreed to press on and finalise the system by June. 
On 26 April, Herman Van Rompuy, President of the European Council, wrote to members of the council, saying "This important file has been discussed for many years and we are now very close to a final deal... This deal is needed now, because this is an issue of crucial importance for innovation and growth. I very much hope that the last outstanding issue will be sorted out at the May Competitiveness Council. If not, I will take it up at the June European Council." The Competitiveness Council met on 30 May and failed to reach agreement. A compromise agreement on the seat(s) of the unified court was eventually reached at the June European Council (28–29 June 2012), splitting the central division according to technology between Paris (the main seat), London and Munich. However, on 2 July 2012, the European Parliament decided to postpone the vote following a move by the European Council to modify the arrangements previously approved by MEPs in negotiations with the European Council. The modification was considered controversial and included the deletion of three key articles (6–8) of the legislation, seeking to reduce the competence of the European Union Court of Justice in unitary patent litigation. On 9 July 2012, the Committee on Legal Affairs of the European Parliament debated the patent package following the decisions adopted by the General Council on 28–29 June 2012 in camera in the presence of MEP Bernhard Rapkay. A later press release by Rapkay quoted from a legal opinion submitted by the Legal Service of the European Parliament, which affirmed the concerns of MEPs to approve the decision of a recent EU summit to delete said articles as it "nullifies central aspects of a substantive patent protection". A Europe-wide uniform protection of intellectual property would thus not exist with the consequence that the requirements of the corresponding EU treaty would not be met and that the European Court of Justice could therefore invalidate the legislation. By the end of 2012 a new compromise was reached between the European Parliament and the European Council, including a limited role for the European Court of Justice. The Unified Court will apply the Unified Patent Court Agreement, which is considered national patent law from an EU law point of view, but still is equal for each participant. [However the draft statutory instrument aimed at implementation of the Unified Court and UPC in the UK provides for different infringement laws for: European patents (unitary or not) litigated through the Unified Court; European patents (UK) litigated before UK courts; and national patents]. The legislation for the enhanced co-operation mechanism was approved by the European Parliament on 11 December 2012 and the regulations were signed by the European Council and European Parliament officials on 17 December 2012. On 30 May 2011, Italy and Spain challenged the council's authorisation of the use of enhanced co-operation to introduce the trilingual (English, French, German) system for the unitary patent, which they viewed as discriminatory to their languages, with the CJEU on the grounds that it did not comply with the EU treaties. In January 2013, Advocate General Yves Bot delivered his recommendation that the court reject the complaint. Suggestions by the Advocate General are advisory only, but are generally followed by the court. 
The case was dismissed by the court in April 2013, however Spain launched two new challenges with the EUCJ in March 2013 against the regulations implementing the unitary patent package. The court hearing for both cases was scheduled for 1 July 2014. Advocate-General Yves Bot published his opinion on 18 November 2014, suggesting that both actions be dismissed ( and ). The court handed down its decisions on 5 May 2015 as and fully dismissing the Spanish claims. Following a request by its government, Italy became a participant of the unitary patent regulations in September 2015. European patents. European patents are granted in accordance with the provisions of the European Patent Convention (EPC), via a unified procedure before the European Patent Office (EPO). While upon filing of a European patent application, all 39 Contracting States are automatically designated, a European patent becomes a bundle of "national" European patents upon grant. In contrast to the unified character of a European patent application, a granted European patent has, in effect, no unitary character, except for the centralized opposition procedure (which can be initiated within 9 months from grant, by somebody else than the patent proprietor), and the centralized limitation and revocation procedures (which can only be instituted by the patent proprietor). In other words, a European patent in one Contracting State, i.e. a "national" European patent, is effectively independent of the same European patent in each other Contracting State, except for the opposition, limitation and revocation procedures. The enforcement of a European patent is dealt with by national law. The abandonment, revocation or limitation of the European patent in one state does not affect the European patent in other states. While the EPC already provided the possibility for a group of member states to allow European patents to have a unitary character also after grant, until now, only Liechtenstein and Switzerland have opted to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)). By requesting unitary effect within one month of grant, the patent proprietor is now able to obtain uniform protection in the participating members states of the European Union in a single step, considerably simplifying obtaining patent protection in a large part of the EU. The unitary patent system co-exists with national patent systems and European patent without unitary effects. The unitary patent does not cover EPC countries that are not member of the European Union, such as UK or Turkey. Legal basis and implementation. The implementation of the unitary patent is based on three legal instruments: Thus the unitary patent is based on EU law as well as the European Patent Convention (EPC). provides the legal basis for establishing a common system of patents for Parties to the EPC. Previously, only Liechtenstein and Switzerland had used this possibility to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)). Regulations regarding the unitary patent. The first two regulations were approved by the European Parliament on 11 December 2012, with future application set for the 25 member states then participating in the enhanced cooperation for a unitary patent (all then EU member states except Croatia, Italy and Spain). The instruments were adopted as regulations EU 1257/2012 and 1260/2012 on 17 December 2012, and entered into force in January 2013. 
Following a request by its government, Italy became a participant of the unitary patent regulations in September 2015. As of March 2022, neither of the two remaining non-participants in the unitary patent (Spain and Croatia) had requested the European Commission to participate. Although formally the Regulations will apply to all 25 participating states from the moment the UPC Agreement enters into force for the first group of ratifiers, the unitary effect of newly granted unitary patents will only extend to those of the 25 states where the UPC Agreement has entered into force, while patent coverage for other participating states without UPC Agreement ratification will be covered by a coexisting normal European patent in each of those states. The unitary effect of unitary patents means a single renewal fee, a single ownership, a single object of property, a single court (the Unified Patent Court) and uniform protection, which means that revocation as well as infringement proceedings are to be decided for the unitary patent as a whole rather than for each country individually. Licensing is however to remain possible for part of the unitary territory. Role of the European Patent Office. Some administrative tasks relating to the European patents with unitary effect are performed by the European Patent Office (Regulation (EU) No 1257/2012, Art. 9.1), as authorized by . These tasks include the collection of renewal fees and registration of unitary effect upon grant, recording licenses and statements that licenses are available to any person. Decisions of the European Patent Office regarding the unitary patent are open to appeal to the Unified Patent Court, rather than to the EPO Boards of Appeal. Translation requirements for the European patent with unitary effect. For a unitary patent, ultimately no translation will be required (except under certain circumstances in the event of a dispute), which is expected to significantly reduce the cost for protection in the whole area. However, Regulation 1260/2012 provides that, during a transitional period of at least six years and no more than twelve years, one translation needs to be provided. Namely, a full translation of the European patent specification needs to be provided either into English if the language of the proceedings at the EPO was French or German, or into any other EU official language if the language of the proceedings at the EPO was English. Such translation will have no legal effect and will be "for information purposes only". In addition, machine translations will be provided, which will be, in the words of the regulation, "for information purposes only and should not have any legal effect". Comparison with the current translation requirements for traditional bundle European patents. In several EPC contracting states, for the national part of a traditional bundle European patent (i.e., for a European patent without unitary effect), a translation has to be filed within a three-month time limit after the publication of grant in the European Patent Bulletin under , otherwise the patent is considered never to have existed (void ab initio) in that state. For the 22 parties to the London Agreement, this requirement has already been abolished or reduced (e.g. by dispensing with the requirement if the patent is available in English, and/or only requiring translation of the claims).
Translation requirements for the participating states in the enhanced cooperation for a unitary patent are shown below: Unitary patent as an object of property. Article 7 of Regulation 1257/2012 provides that, as an object of property, a European patent with unitary effect will be treated "in its entirety and in all participating Member States as a national patent of the participating Member State in which that patent has unitary effect and in which the applicant had her/his residence or principal place of business or, by default, had a place of business on the date of filing the application for the European patent." When the applicant had no domicile in a participating Member State, German law will apply. Ullrich has criticized the system, which is similar to the Community Trademark and the Community Design, as being "in conflict with both the purpose of the creation of unitary patent protection and with primary EU law." Agreement on a Unified Patent Court. The Agreement on a Unified Patent Court provides the legal basis for the Unified Patent Court (UPC): a patent court for European patents (with and without unitary effect), with jurisdiction in those countries where the Agreement is in effect. In addition to regulations regarding the court structure, it also contains substantive provisions relating to the right to prevent use of an invention and allowed use by non-patent proprietors (e.g. for private non-commercial use), preliminary and permanent injunctions. Entry into force for the UPC took place after Germany deposited its instrument of ratification of the UPC Agreement, which triggered the countdown until the Agreement's entry into force on June 1, 2023. Parties. The UPC Agreement was signed on 19 February 2013 by 24 EU member states, including all states then participating in the enhanced co-operation measures except Bulgaria and Poland. Bulgaria signed the agreement on 5 March 2013 following internal administrative procedures. Italy, which did not originally join the enhanced co-operation measures but subsequently signed up, did sign the UPC agreement. The agreement remains open to accession for all remaining EU member states, with all European Union Member States except Spain and Poland having signed the Agreement. States which do not participate in the unitary patent regulations can still become parties to the UPC agreement, which would allow the new court to handle European patents validated in the country. On 18 January 2019, Kluwer Patent Blog wrote, "a recurring theme for some years has been that 'the UPC will start next year'". At the time, Brexit and a German constitutional court complaint were considered the main obstacles. The German constitutional court first decided in a decision of 13 February 2020 against the German ratification of the Agreement on the ground that the German Parliament did not vote with the required majority (2/3 according to the judgement). After a second vote and further, this time unsuccessful, constitutional complaints, Germany formally ratified the UPC Agreement on 7 August 2021. While the UK ratified the agreement in April 2018, the UK later withdrew from the Agreement following Brexit. As of the entry into force of the UPC on 1 June 2023, 17 countries had ratified the Agreement. Romania ratified the agreement in May 2024, and will join as the 18th participating member on 1 September 2024. Jurisdiction.
The Unified Patent Court has exclusive jurisdiction in infringement and revocation proceedings involving European patents with unitary effect, and during a transition period non-exclusive jurisdiction regarding European patents without unitary effect in the states where the Agreement applies, unless the patent proprietor decides to opt out. It furthermore has jurisdiction to hear cases against decisions of the European Patent Office regarding unitary patents. As a court of several member states of the European Union it may (Court of First Instance) or must (Court of Appeal) ask prejudicial questions to the European Court of Justice when the interpretation of EU law (including the two unitary patent regulations, but excluding the UPC Agreement) is not obvious. Organization. The court has two instances: a court of first instance and a court of appeal. The court of appeal and the registry have their seats in Luxembourg, while the central division of the court of first instance would have its seat in Paris. The central division has a thematic branch in Munich (the London location has yet to be replaced by a new location within the EU). The court of first instance may further have local and regional divisions in all member states that wish to set up such divisions. Geographical scope of and request for unitary effect. While the regulations formally apply to all 25 member states participating in the enhanced cooperation for a unitary patent, from the date the UPC agreement has entered into force for the first group of ratifiers, unitary patents will only extend to the territory of those participating member states where the UPC Agreement had entered into force when the unitary effect was registered. If the unitary effect territory subsequently expands to additional participating member states for which the UPC Agreement later enters into force, this will be reflected for all subsequently registered unitary patents, but the territorial scope of the unitary effect of existing unitary patents will not be extended to these states. Unitary effect can be requested up to one month after grant of the European patent directly at the EPO, with retroactive effect from the date of grant. However, according to the "Draft Rules Relating to Unitary Patent Protection", unitary effect would be registered only if the European patent has been granted with the same set of claims for all the 25 participating member states in the regulations, whether the unitary effect applies to them or not. European patents automatically become a bundle of "national" European patents upon grant. Upon the grant of unitary effect, the "national" European patents will retroactively be considered to never have existed in the territories where the unitary patent has effect. The unitary effect does not affect "national" European patents in states where the unitary patent does not apply. Any "national" European patents applying outside the "unitary effect" zone will co-exist with the unitary patent. Special territories of participating member states. As the unitary patent is introduced by an EU regulation, it is expected to not only be valid in the mainland territory of the participating member states that are party to the UPC, but also in those of their special territories that are part of the European Union. 
As of April 2014, this includes the following fourteen territories: In addition to the territories above, the European Patent Convention has been extended by two member states participating in the enhanced cooperation for a unitary patent to cover some of their dependent territories outside the European Union: In following of those territories, the unitary patent is de facto extended through application of national (French, or Dutch) law: However, the unitary patent does not apply in the French territories French Polynesia and New Caledonia as implementing legislation would need to be passed by those jurisdictions (rather than the French national legislation required in the other territories) and this has not been done. Costs. The renewal fees are planned to be based on the cumulative renewal fees due in the four countries where European patents were most often validated in 2015 (Germany, France, the UK and the Netherlands). This is despite the UK leaving the unitary patent system following Brexit. The renewal fees of the unitary patent would thus be ranging from 35 Euro in the second year to 4855 in the 20th year. The renewal fees will be collected by the EPO, with the EPO keeping 50% of the fees and the other 50% being redistributed to the participating member states. Translation requirements as well as the requirement to pay yearly patent maintenance fees in individual countries presently renders the European patent system costly to obtain protection in the whole of the European Union. In an impact assessment from 2011, the European Commission estimated that the costs of obtaining a patent in all 27 EU countries would drop from over 32 000 euro (mainly due to translation costs) to 6 500 euro (for the combination of an EU, Spanish and Italian patent) due to introduction of the Unitary patent. Per capita costs of an EU patent were estimated at just 6 euro/million in the original 25 participating countries (and 12 euro/million in the 27 EU countries for protection with a Unitary, Italian and Spanish patent). How the EU Commission has presented the expected cost savings has however been sharply criticized as exaggerated and based on unrealistic assumptions. The EU Commission has notably considered the costs for validating a European patent in 27 countries while in reality only about 1% of all granted European patents are currently validated in all 27 EU states. Based on more realistic assumptions, the cost savings are expected to be much lower than actually claimed by the commission. For example, the EPO calculated that for an average EP patent validated and maintained in 4 countries, the overall savings to be between 3% and 8%. Statistics. During the first year of the unitary patent, that is, from 1 June 2023, to 31 May 2024, more than 27500 European patents with unitary effect have been registered. This corresponds to almost a quarter of all European patents granted during that period. Earlier attempts. 1970s and 1980s: proposed Community Patent Convention. Work on a Community patent started in the 1970s, but the resulting Community Patent Convention (CPC) was a failure. The "Luxembourg Conference on the Community Patent" took place in 1975 and the Convention for the European Patent for the common market, or (Luxembourg) Community Patent Convention (CPC), was signed at Luxembourg on 15 December 1975, by the 9 member states of the European Economic Community at that time. However, the CPC never entered into force. It was not ratified by enough countries. 
Fourteen years later, the Agreement relating to Community patents was made at Luxembourg on 15 December 1989. It attempted to revive the CPC project, but also failed. This Agreement consisted of an amended version of the original Community Patent Convention. Twelve states signed the Agreement: Belgium, Denmark, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, and United Kingdom. All of those states would need to have ratified the Agreement to cause it to enter into force, but only seven did so: Denmark, France, Germany, Greece, Luxembourg, the Netherlands, and United Kingdom. Nevertheless, a majority of member states of the EEC at that time introduced some harmonisation into their national patent laws in anticipation of the entry in force of the CPC. A more substantive harmonisation took place at around the same time to take account of the European Patent Convention and the Strasbourg Convention. 2000 to 2004: EU Regulation proposal. In 2000, renewed efforts from the European Union resulted in a Community Patent Regulation proposal, sometimes abbreviated as CPR. It provides that the patent, once it has been granted by the European Patent Office (EPO) in one of its procedural languages (English, German or French) and published in that language, with a translation of the claims into the two other procedural languages, will be valid without any further translation. This proposal is aimed to achieve a considerable reduction in translation costs. Nevertheless, additional translations could become necessary in legal proceedings against a suspected infringer. In such a situation, a suspected infringer who has been unable to consult the text of the patent in the official language of the Member State in which he is domiciled, is presumed, until proven otherwise, not to have knowingly infringed the patent. To protect a suspected infringer who, in such a situation, has not acted in a deliberate manner, it is provided that the proprietor of the patent will not be able to obtain damages in respect of the period prior to the translation of the patent being notified to the infringer. The proposed Community Patent Regulation should also establish a court holding exclusive jurisdiction to invalidate issued patents; thus, a Community Patent's validity will be the same in all EU member states. This court will be attached to the present European Court of Justice and Court of First Instance through use of provisions in the Treaty of Nice. Discussion regarding the Community patent had made clear progress in 2003 when a political agreement was reached on 3 March 2003. However, one year later in March 2004 under the Irish presidency, the Competitiveness Council failed to agree on the details of the Regulation. In particular the time delays for translating the claims and the authentic text of the claims in case of an infringement remained problematic issues throughout discussions and in the end proved insoluble. 
In view of the difficulties in reaching an agreement on the community patent, other legal agreements have been proposed outside the European Union legal framework to reduce the cost of translation (of patents when granted) and litigation, namely the London Agreement, which entered into force on 1 May 2008—and which has reduced the number of countries requiring translation of European patents granted nowadays under the European Patent Convention, and the corresponding costs to obtain a European patent—and the European Patent Litigation Agreement (EPLA), a proposal that has now lapsed. Reactions to the failure. After the council in March 2004, EU Commissioner Frits Bolkestein said that "The failure to agree on the Community Patent I am afraid undermines the credibility of the whole enterprise to make Europe the most competitive economy in the world by 2010." Adding: Jonathan Todd, Commission's Internal Market spokesman, declared: European Commission President Romano Prodi, asked to evaluate his five-year term, cited as his weak point the failure of many EU governments to implement the "Lisbon Agenda", agreed in 2001. In particular, he cited the failure to agree on a Europewide patent, or even the languages to be used for such a patent, "because member states did not accept a change in the rules; they were not coherent". Since 2005: stalemate and new debate. Thus, in 2005, the Community patent looked unlikely to be implemented in the near future. However, on 16 January 2006 the European Commission "launched a public consultation on how future action in patent policy to create an EU-wide system of protection can best take account of stakeholders' needs." The Community patent was one of the issues the consultation focused on. More than 2500 replies were received. According to the European Commission, the consultation showed that there is widespread support for the Community patent but not at any cost, and "in particular not on the basis of the Common Political Approach reached by EU Ministers in 2003". In February 2007, EU Commissioner Charlie McCreevy was quoted as saying: The European Commission released a white paper in April 2007 seeking to "improve the patent system in Europe and revitalise the debate on this issue." On 18 April 2007, at the European Patent Forum in Munich, Germany, Günter Verheugen, vice-president of the European Commission, said that his proposal to support the European economy was "to have the London Agreement ratified by all member states, and to have a European patent judiciary set up, in order to achieve rapid implementation of the Community patent, which is indispensable". He further said that he believed this could be done within five years. In October 2007, the Portuguese presidency of the Council of the European Union proposed an EU patent jurisdiction, "borrowing heavily from the rejected draft European Patent Litigation Agreement (EPLA)". In November 2007, EU ministers were reported to have made some progress towards a community patent legal system, with "some specific results" expected in 2008. In 2008, the idea of using machine translations to translate patents was proposed to solve the language issue, which is partially responsible for blocking progress on the community patent. Meanwhile, European Commissioner for Enterprise and Industry Günter Verheugen declared at the European Patent Forum in May 2008 that there was an "urgent need" for a community patent. Agreement in December 2009, and language issue. 
In December 2009, it was reported that the Swedish EU presidency had achieved a breakthrough in negotiations concerning the community patent. The breakthrough was reported to involve setting up a single patent court for the EU; however, ministers conceded that much work remained to be done before the community patent would become a reality. According to the agreed plan, the EU would accede to the European Patent Convention as a contracting state, and patents granted by the European Patent Office would, when validated for the EU, have unitary effect in the territory of the European Union. On 10 November 2010, it was announced that no agreement had been reached and that, "in spite of the progress made, [the Competitiveness Council of the European Union had] fallen short of unanimity by a small margin," with commentators reporting that the Spanish representative, citing the aim to avoid any discrimination, had "re-iterated at length the stubborn rejection of the Madrid Government of taking the 'Munich' three languages regime (English, German, French) of the European Patent Convention (EPC) as a basis for a future EU Patent."
6763
1398
https://en.wikipedia.org/wiki?curid=6763
Cistron
A cistron is a region of DNA that is conceptually equivalent to some definitions of a gene, such that the terms are synonymous from certain viewpoints, especially with regard to the molecular gene as contrasted with the Mendelian gene. The question of how large a segment of DNA constitutes a unit of selection governs whether cistrons are the same thing as genes. The word "cistron" is used to emphasize that molecular genes exhibit a specific behavior in a complementation test (cis-trans test); distinct positions (or loci) within a genome are cistronic. History. The words "cistron" and "gene" were coined before the advancing state of biology made it clear to many people that the concepts they refer to, at least in some senses of the word "gene", are either equivalent or nearly so. The same historical naming practices are responsible for many of the synonyms in the life sciences. The term "cistron" was coined by Seymour Benzer in an article entitled "The elementary units of heredity". The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test. Richard Dawkins in his influential book "The Selfish Gene" argues "against" the cistron being the unit of selection and against it being the best definition of a gene. (He also argues against group selection.) He does not argue against the existence of cistrons, or their being elementary, but rather against the idea that natural selection selects them; he argues that it used to, back in earlier eras of life's development, but not anymore. He defines a gene as a larger unit, which others may now call gene clusters, as the unit of selection. He also defines replicators, more general than cistrons and genes, in this gene-centered view of evolution. Definition. If a cistron is defined as a segment of DNA coding for a single polypeptide, the structural gene in a transcription unit can be described as monocistronic (mostly in eukaryotes) or polycistronic (mostly in prokaryotes such as bacteria). For example, suppose a mutation at a chromosome position "x" is responsible for a recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, "y", is responsible for the same recessive trait. The positions "x" and "y" are said to be within the same cistron when an organism that has the mutation at "x" on one chromosome and the mutation at position "y" on the paired chromosome exhibits the recessive trait even though the organism is not homozygous for either mutation. When instead the wild type trait is expressed, the positions are said to belong to distinct cistrons / genes. Simply put, mutations in the same cistron will not complement, whereas mutations in different cistrons may complement (see Benzer's T4 bacteriophage rII system experiments). For example, an operon is a stretch of DNA that is transcribed to create a contiguous segment of RNA, but contains more than one cistron / gene. The operon is said to be polycistronic, whereas ordinary genes are said to be monocistronic.
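The complementation logic described above can be summarised in a short sketch. The following Python snippet is illustrative only and not part of the original description; the function name and the representation of each mutation as a (cistron, position) pair are assumptions made for clarity, since in a real experiment the cistron assignment is precisely what the test is used to infer.

```python
# Minimal sketch of the cis-trans (complementation) test logic described above.
# Each recessive mutation is represented here as a (cistron, position) pair;
# this labelling is an illustrative assumption.

def complements(mutation_a, mutation_b):
    """Return True if a trans heterozygote (mutation_a on one chromosome,
    mutation_b on the homologous chromosome) shows the wild-type phenotype."""
    cistron_a, _ = mutation_a
    cistron_b, _ = mutation_b
    # Same cistron: neither chromosome carries an intact copy of that cistron,
    # so the recessive trait appears (no complementation).
    # Different cistrons: each chromosome supplies a working copy of the
    # cistron that is mutated on the other, so the wild type is restored.
    return cistron_a != cistron_b

# Two mutations in the same cistron fail to complement...
print(complements(("rIIA", 17), ("rIIA", 204)))  # False -> mutant phenotype
# ...while mutations in different cistrons complement.
print(complements(("rIIA", 17), ("rIIB", 55)))   # True  -> wild type
```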
6766
1298091217
https://en.wikipedia.org/wiki?curid=6766
Commonwealth
A commonwealth is a traditional English term for a political community founded for the common good. The noun "commonwealth", meaning "public welfare, general good or advantage", dates from the 15th century. Originally a phrase (the common-wealth or the common wealth – echoed in the modern synonym "public wealth"), it comes from the old meaning of "wealth", which is "well-being", and was deemed analogous to the Latin "res publica". The term literally meant "common well-being". In the 17th century, the definition of "commonwealth" expanded from its original sense of "public welfare" or "commonweal" to mean "a state in which the supreme power is vested in the people; a republic or democratic state". The term evolved to become a title to a number of political entities. Three countries – Australia, the Bahamas, and Dominica – have the official title "Commonwealth", as do four U.S. states and two U.S. territories. Since the early 20th century, the term has been used to name some fraternal associations of states, most notably the Commonwealth of Nations, an organisation primarily of former territories of the British Empire. It is also used in the translation for the organisation made up of formerly Soviet states, the Commonwealth of Independent States. Historical use. Rome. Translations of Ancient Roman writers' works to English have on occasion translated "Res publica", and variants thereof, to "the commonwealth", a term referring to the Roman state as a whole. England. The Commonwealth of England was the official name of the political unit ("de facto" military rule in the name of parliamentary supremacy) that replaced the Kingdom of England (after the English Civil War) from 1649 to 1653 and 1659 to 1660, under the rule of Oliver Cromwell and his son and successor Richard. From 1653 to 1659, although still legally known as a Commonwealth, the republic, united with the former Kingdom of Scotland, operated under different institutions (at times as a "de facto" monarchy) and is known by historians as the Protectorate. In a British context, it is sometimes referred to as the "Old Commonwealth". In the later 20th century a socialist political party known as the Common Wealth Party was active. Previously a similarly named party, the Commonwealth Land Party, was in existence. Iceland. The period of Icelandic history from the establishment of the Althing in 930 to the pledge of fealty to the Norwegian king in 1262 is usually called the "Icelandic Nation" () in Icelandic and the "Icelandic Commonwealth" in English. In this period Iceland was colonized by a public consisting largely of recent immigrants from Norway who had fled the unification of that country under King Harald Fairhair. Philippines. The Commonwealth of the Philippines was the administrative body that governed the Philippines from 1935 to 1946, aside from a period of exile in the Second World War from 1942 to 1945 when Japan occupied the country. It replaced the Insular Government, a United States territorial government, and was established by the Tydings–McDuffie Act. The Commonwealth was designed as a transitional administration in preparation for the country's full achievement of independence, which was achieved in 1946. The Commonwealth of the Philippines was a founding member of the United Nations. Poland–Lithuania. "Republic" is still an alternative translation of the traditional name "Rzeczpospolita" of the Polish–Lithuanian Commonwealth. 
Wincenty Kadłubek (Vincent Kadlubo, 1160–1223) was the first to use the original Latin term "res publica" in the context of Poland, in his "Chronicles of the Kings and Princes of Poland". The name was used officially for the confederal union formed by Poland and Lithuania (1569–1795). It is also often referred to as the "Nobles' Commonwealth" (1505–1795, i.e., before the union). In the contemporary political doctrine of the Polish–Lithuanian Commonwealth, "our state is a Republic (or Commonwealth) under the presidency of the King". The Commonwealth introduced a doctrine of religious tolerance called the Warsaw Confederation, and had its own parliament, the "Sejm" (although elections were restricted to the nobility), as well as elected kings, who were bound to certain contracts ("Pacta conventa") from the beginning of their reign. "A commonwealth of good counsaile" was the title of the 1607 English translation of the work of Wawrzyniec Grzymała Goślicki "De optimo senatore", which presented to English readers many of the ideas present in the political system of the Polish–Lithuanian Commonwealth. Catalonia. Between 1914 and 1925, Catalonia was an autonomous region of Spain. Its government during that time was given the title "mancomunidad" (Catalan: "mancomunitat"), which is translated into English as "commonwealth". The Commonwealth of Catalonia had limited powers and was formed as a federation of the four Catalan provinces. A number of Catalan-language institutions were created during its existence. Liberia. Between 1838 and 1847, Liberia was officially known as the "Commonwealth of Liberia". It changed its name to the "Republic of Liberia" when it declared independence (and adopted a new constitution) in 1847. Current use. Australia. "Commonwealth" was first proposed as a term for a federation of the six Australian crown colonies at the 1891 constitutional convention in Sydney. Its adoption was initially controversial, as it was associated by some with the republicanism of Oliver Cromwell (see above), but it was retained in all subsequent drafts of the constitution. The term was finally incorporated into law in the "Commonwealth of Australia Constitution Act 1900", which established the federation. Australia operates under a federal system, in which power is divided between the federal (national) government and the state governments (the successors of the six colonies). So, in an Australian context, the term "Commonwealth" (capitalised), which is often abbreviated to Cth, refers to the federal government, and "Commonwealth of Australia" is the official name of the country. The Bahamas. The Bahamas, a Commonwealth realm, has used the official style "Commonwealth of The Bahamas" since its independence in 1973. Dominica. The small Caribbean republic of Dominica has used the official style "Commonwealth of Dominica" since 1978. Certain U.S. states and territories. States. Four states of the United States of America officially designate themselves as "commonwealths": Kentucky, Massachusetts, Pennsylvania, and Virginia. All four were part of Great Britain's possessions along the Atlantic coast of North America prior to the American Revolution. As such, they share a strong influence of English common law in some of their laws and institutions. Territories. Two organized but unincorporated U.S. territories, Puerto Rico and the Northern Mariana Islands, are called commonwealths. In 2016, the Washington, D.C. 
city council also selected "Douglass Commonwealth" as the potential name of the State of Washington, D.C., following the 2016 statehood referendum, at least partially in order to retain the initials "D.C." as the state's abbreviation. International bodies. Commonwealth of Nations. The Commonwealth of Nations—formerly the British Commonwealth—is a voluntary association of 56 independent sovereign states, most of which were once part of the British Empire. The Commonwealth's membership includes both republics and monarchies. The Head of the Commonwealth is King Charles III, who, since his accession in 2022, has also reigned directly as monarch in the 15 member states known as Commonwealth realms. Commonwealth of Independent States. The Commonwealth of Independent States (CIS) is a loose alliance or confederation consisting of nine of the 15 former Soviet Republics, the exceptions being Turkmenistan (a CIS associate member), Lithuania, Latvia, Estonia, Ukraine, and Georgia. Georgia left the CIS in August 2008 following the Russian military's invasion of South Ossetia and Abkhazia that year. Its creation signalled the dissolution of the Soviet Union, its purpose being to "allow a civilised divorce" between the Soviet Republics. The CIS has developed as a forum by which the member states can co-operate in economics, defence, and foreign policy. Proposed use. United Kingdom. Labour MP Tony Benn sponsored a Commonwealth of Britain Bill several times between 1991 and 2001, intended to abolish the monarchy and establish a British republic. It never reached a second reading.
6767
18779361
https://en.wikipedia.org/wiki?curid=6767
Commodore 1541
The Commodore 1541 (also known as the CBM 1541 and VIC-1541) is a floppy disk drive which was made by Commodore International for the Commodore 64 (C64), Commodore's most popular home computer. The best-known floppy disk drive for the C64, the 1541 is a single-sided 170-kilobyte drive for 5¼" disks. The 1541 directly followed the Commodore 1540 (meant for the VIC-20). The disk drive uses group coded recording (GCR) and contains a MOS Technology 6502 microprocessor, doubling as a disk controller and on-board disk operating system processor. The number of sectors per track varies from 17 to 21 (an early implementation of zone bit recording with 4 constant angular velocity zones). The drive's built-in disk operating system is CBM DOS 2.6. History. Introduction. The 1541 was priced at under US$400 at its introduction. A C64 with a 1541 cost about $900, while an Apple II with no disk drive cost $1,295. The first 1541 drives produced in 1982 have a label on the front reading VIC-1541 and an off-white case to match the VIC-20. In 1983, the 1541 switched to the familiar beige case and a front label reading simply "1541" along with rainbow stripes to match the Commodore 64. By 1983, a 1541 sold for $300 or less. After a home computer price war instigated by Commodore, the C64 and 1541 together cost under $500. The drive became very popular and difficult to find. The company said that the shortage occurred because 90% of C64 owners bought the 1541 compared to its 30% expectation, but the press discussed what "Creative Computing" described as "an absolutely alarming return rate" because of defects. The magazine reported in March 1984 that it received three defective drives in two weeks, and "Compute!'s Gazette" reported in December 1983 that four of the magazine's seven drives had failed; "COMPUTE! Publications sorely needs additional 1541s for in-house use, yet we can't find any to buy. After numerous phone calls over several days, we were able to locate only two units in the entire continental United States", reportedly because of Commodore's attempt to resolve a manufacturing issue that caused the high failures. The early (1982 to 1983) 1541s have a spring-eject mechanism (Alps drive), and the disks often fail to release. This style of drive has the popular nickname "Toaster Drive", because it requires the use of a knife or other hard thin object to pry out the stuck media, just like a piece of toast stuck in an actual toaster. This was fixed later when Commodore changed the vendor of the drive mechanism to Mitsumi and adopted its flip-lever Newtronics mechanism, greatly improving reliability. In addition, Commodore made the drive's controller board smaller and reduced its chip count compared to the early 1541s (which had a large PCB running the length of the case, with dozens of TTL chips). The beige-case Newtronics 1541 was produced from 1984 to 1986. Versions and third-party clones. All but the very earliest non-II model 1541s can use either the Alps or Newtronics mechanism. Visually, the first models, of the "VIC-1541" denomination, have an off-white color like the VIC-20 and VIC-1540. Then, to match the look of the C64, CBM changed the drive's color to brown-beige and the name to "Commodore 1541". The 1541's numerous shortcomings opened a market for a number of third-party clones of the disk drive. Examples include the "Oceanic OC-118" a.k.a. 
"Excelerator+", the MSD Super Disk single and dual drives, the "Enhancer 2000", the "Indus GT", Blue Chip Electronics's BCD/5.25, and "CMD"s "FD-2000" and "FD-4000". Nevertheless, the 1541 became the first disk drive to see widespread use in the home and Commodore sold millions of the units. In 1986, Commodore released the 1541C, a revised version that offers quieter and slightly more reliable operation and a light beige case matching the color scheme of the Commodore 64C. It was replaced in 1988 by the 1541-II, which uses an external power supply to provide cooler operation and allows the drive to have a smaller desktop footprint (the power supply "brick" being placed elsewhere, typically on the floor). Later ROM revisions fixed assorted problems, including a software bug that causes the save-and-replace command to corrupt data. Successors. The Commodore 1570 is an upgrade from the 1541 for use with the Commodore 128, available in Europe. It offers MFM capability for accessing CP/M disks, improved speed, and somewhat quieter operation, but was only manufactured until Commodore got its production lines going with the 1571, the double-sided drive. Finally, the small, external-power-supply-based, MFM-based Commodore 1581 3½-inch drive was made, giving 800 KB access to the C128 and C64. Design. Hardware. The 1541 does not have DIP switches to change the device number. If a user adds more than one drive to a system, the user has to cut a trace in the circuit board to permanently change the drive's device number, or hand-wire an external switch to allow it to be changed externally. It is also possible to change the drive number via a software command, which is temporary and would be erased as soon as the drive was powered off. 1541 drives at power up always default to device #8. If multiple drives in a chain are used, then the startup procedure is to power on the first drive in the chain, alter its device number via a software command to the highest number in the chain (if three drives were used, then the first drive in the chain would be set to device #10), then power on the next drive, alter its device number to the next lowest, and repeat the procedure until the final drive at the end of the chain was powered on and left as device #8. Unlike the Apple II, where support for two drives is normal, it is relatively uncommon for Commodore software to support this setup, and the CBM DOS copy file command is not able to copy files between drives – a third party copy utility is necessary. The pre-II 1541s also have an internal power source, which generates a lot of heat. The heat generation was a frequent source of humour. For example, "Compute!" stated in 1988 that "Commodore 64s used to be a favorite with amateur and professional chefs since they could compute and cook on top of their 1500-series disk drives at the same time". A series of humorous tips in "MikroBitti" in 1989 said "When programming late, coffee and kebab keep nicely warm on top of the 1541." The "MikroBitti" review of the 1541-II said that its external power source "should end the jokes about toasters". The drive-head mechanism installed in the early production years is notoriously easy to misalign. The most common cause of the 1541's drive head knocking and subsequent misalignment is copy-protection schemes on commercial software. The main cause of the problem is that the disk drive itself does not feature any means of detecting when the read/write head reaches track zero. 
Accordingly, when a disk is not formatted or a disk error occurs, the unit tries to move the head 40 times in the direction of track zero (although the 1541 DOS only uses 35 tracks, the drive mechanism itself is a 40-track unit, so this ensured track zero would be reached no matter where the head was before). Once track zero is reached, every further attempt to move the head in that direction would cause it to be rammed against a solid stop: for example, if the head happened to be on track 18 (where the directory is located) before this procedure, the head would be actually moved 18 times, and then rammed against the stop 22 times. This ramming gives the characteristic "machine gun" noise and sooner or later throws the head out of alignment. A defective head-alignment part likely caused many of the reliability issues in early 1541 drives; one dealer told "Compute!s Gazette" in 1983 that the part had caused all but three of several hundred drive failures that he had repaired. The drives were so unreliable that "Info" magazine joked, "Sometimes it seems as if one of the original design specs ... must have said 'Mean time between failure: 10 accesses.'" Users can realign the drive themselves with a software program and a calibration disk. The user can remove the drive from its case and then loosen the screws holding the stepper motor that move the head, then with the calibration disk in the drive gently turn the stepper motor back and forth until the program shows a good alignment. The screws are then tightened and the drive is put back into its case. A third-party fix for the 1541 appeared in which the solid head stop was replaced by a sprung stop, giving the head a much easier life. The later 1571 drive (which is 1541-compatible) incorporates track-zero detection by photo-interrupter and is thus immune to the problem. Also, a software solution, which resides in the drive controller's ROM, prevents the rereads from occurring, though this can cause problems when genuine errors do occur. Due to the alignment issues on the Alps drive mechanisms, Commodore switched suppliers to Newtronics in 1984. The Newtronics mechanism drives have a lever rather than a pull-down tab to close the drive door. Although the alignment issues were resolved after the switch, the Newtronics drives add a new reliability problem in that many of the read/write heads are improperly sealed, causing moisture to penetrate the head and short it out. The 1541's PCB consists mainly of a 6502 CPU, two 6522 VIA chips, and 2 KB of work RAM. Up to 48 KB of RAM can be added; this is mainly useful for defeating copy protection schemes since an entire disk track could be loaded into drive RAM, while the standard 2 KB only accommodates a few sectors (theoretically eight, but some of the RAM was used by CBM DOS as work space). Some Commodore users use 1541s as an impromptu math coprocessor by uploading math-intensive code to the drive for background processing. Interface. The 1541 uses a proprietary serialized derivative of the IEEE-488 parallel interface, found in previous disk drives for the PET/CBM range of personal and business computers, but when the VIC-20 was in development, a cheaper alternative to the expensive IEEE-488 cables was sought. To ensure a ready supply of inexpensive cabling for its home computer peripherals, Commodore chose standard DIN connectors for the serial interface. 
Disk drives and other peripherals such as printers connect to the computer via a daisy chain setup, necessitating only a single connector on the computer itself. Throughput and software. According to a 1985 "IEEE Spectrum" article, the C-64's designers blamed the 1541's slow speed on the marketing department's insistence that the computer be compatible with the 1540, which is slow because of a flaw in the 6522 VIA interface controller. Initially, Commodore intended to use a hardware shift register (one component of the 6522) to maintain fast drive speeds with the new serial interface. However, a hardware bug with this chip prevents the initial design from working as anticipated, and the ROM code was hastily rewritten to handle the entire operation in software. According to Jim Butterfield, this causes a speed reduction by a factor of five; had 1540 compatibility not been a requirement, the disk interface would have been much faster. In any case, the C64 normally cannot work with a 1540 unless the VIC-II display output is disabled via a register write to the DEN bit (register $D011, bit 4), which stops the halting of the CPU during certain video lines to ensure correct serial timing. As implemented on the VIC-20 and C64, Commodore DOS transfers 512 bytes per second, compared to the Atari 810's 1,000 bytes per second, the Apple Disk II's 15,000 bytes per second, and the 300-baud data rate of the Commodore Datasette storage system. About 20 minutes are needed to copy one disk—10 minutes of reading time, and 10 minutes of writing time. However, since both the computer and the drive can easily be reprogrammed, third parties quickly wrote more efficient firmware that would speed up drive operations drastically. Without hardware modifications, some "fast loader" utilities (which bypassed routines in the 1541's onboard ROM) managed to achieve speeds of up to 2.5 kilobytes per second. The most common of these products are the Epyx Fast Load, the Final Cartridge, and the Action Replay plug-in ROM cartridges, which all have machine code monitor and disk editor software on board as well. The popular Commodore computer magazines of the era also entered the arena with type-in fast-load utilities, with "Compute!'s Gazette" publishing "TurboDisk" in 1985 and "RUN" publishing "Sizzle" in 1987. Even though each 1541 has its own on-board disk controller and disk operating system, it is not possible for a user to command two 1541 drives to copy a disk (one drive reading and the other writing) as with older dual drives like the 4040 that was often found with the PET computer, and which the 1541 is backward-compatible with (it can read 4040 disks but not write to them, as a minor difference in the number of header bytes makes the 4040 and 1541 only read-compatible). Originally, to copy from drive to drive, software running on the C64 was needed and it would first read from one drive into computer memory, then write out to the other. Only when Fast Hack'em and, later, other disk backup programs were released, was true drive-to-drive copying possible for a pair of 1541s. The user could, if they wished, unplug the C64 from the drives (i.e., from the first drive in the daisy chain) and do something else with the computer as the drives proceeded to copy the entire disk. Media. The 1541 drive uses standard 5¼-inch double-density floppy media; high-density media will not work due to its different magnetic coating requiring a higher magnetic coercivity. 
As the GCR encoding scheme does not use the index hole, the drive was also compatible with hard-sectored disks. The standard CBM DOS format is 170 KB with 35 tracks and 256-byte sectors. It is similar to the format used on the PET 2031, 2040 & 4040 drives, but a minor difference in the number of header bytes makes these drives and the 1541 only read-compatible; disks formatted with one drive cannot be written to by the other. The drives will allow writes to occur, but the inconsistent header size will damage the data in the data portions of each track. The 4040 drives use Shugart SA-400s, which were 35-track units, thus the format there is due to physical limitations of the drive mechanism. The 1541 uses 40-track mechanisms, but Commodore intentionally limited the CBM DOS format to 35 tracks because of reliability issues with the early units. It is possible via low-level programming to move the drive head to tracks 36–40 and write on them; this is sometimes done by commercial software for copy protection purposes and/or to get additional data on the disk. However, one track is reserved by DOS for directory and file allocation information (the BAM, block availability map). And since, for normal files, two bytes of each physical sector are used by DOS as a pointer to the next physical track and sector of the file, only 254 out of the 256 bytes of a block are used for file contents. If the disk side is not otherwise prepared with a custom format (e.g. for data disks), 664 blocks are free after formatting, giving 664 × 254 = 168,656 bytes (or almost 165 KB) for user data. By using custom formatting and load/save routines (sometimes included in third-party DOSes, see below), all of the mechanically possible 40 tracks can be used. Owing to the drive's non-use of the index hole, it is also possible to make "flippy floppies" by inserting the diskette upside-down and formatting the other side, and it was commonplace for commercial software to be distributed on such disks. Tracks 36–42 are non-standard. The bit rates given for the drive are the raw rates between the read/write head and the signal circuitry; the useful data rate is lower by a factor of 5/4 due to the GCR encoding. The 1541 disk typically has 35 tracks. Track 18 is reserved; the remaining tracks are available for data storage. The header is on 18/0 (track 18, sector 0) along with the BAM, and the directory starts on 18/1 (track 18, sector 1). The file interleave is 10 blocks, while the directory interleave is 3 blocks. Header contents: the header is similar to other Commodore disk headers, the structural differences being the BAM offset and size, and the offset of the label, ID and type fields.
$00–01: track/sector reference to the first directory sector (18/1)
$02: DOS version ('A')
$04–8F: BAM entries (4 bytes per track: free sector count plus 24 bits of sector flags)
$90–9F: disk label, $A0 padded
$A2–A3: disk ID
$A5–A6: DOS type ('2A')
Uses. Early copy protection schemes deliberately introduce read errors on the disk, the software refusing to load unless the correct error message is returned. The general idea is that simple disk-copy programs are incapable of copying the errors. When one of these errors is encountered, the disk drive (as many floppy disk drives do) will make one or more reread attempts after first resetting the head to track zero. Few of these schemes have much deterrent effect, as various software companies soon released "nibbler" utilities that enable protected disks to be copied and, in some cases, the protection removed. 
Commodore copy protection sometimes fails on specific hardware configurations. "Gunship", for example, does not load if a second disk drive or printer is connected to the computer. Similarly "Roland's Ratrace" will crash if additional hardware is detected. The tape version will even crash if a floppy drive is switched on while the game is running.
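As a cross-check of the disk-geometry figures given above (17–21 sectors per track in four speed zones, a 35-track CBM DOS format with 256-byte blocks of which 254 bytes carry user data, and track 18 reserved for the directory and BAM), the following Python sketch reproduces the 170 KB and 664-blocks-free numbers. It is illustrative only; the zone boundaries in the table are the commonly documented 1541 values and are assumed here rather than taken from the text above.

```python
# Back-of-the-envelope check of the 1541 geometry described above.
# Zone table (track range -> sectors per track); these boundaries are the
# commonly documented 1541 values, assumed here for illustration.
ZONES = [
    (range(1, 18), 21),   # tracks 1-17
    (range(18, 25), 19),  # tracks 18-24
    (range(25, 31), 18),  # tracks 25-30
    (range(31, 36), 17),  # tracks 31-35
]

SECTOR_SIZE = 256      # bytes per block
DATA_PER_BLOCK = 254   # two bytes of each block hold the next track/sector link
RESERVED_TRACK = 18    # directory and BAM live on track 18

def sectors_on_track(track):
    for tracks, sectors in ZONES:
        if track in tracks:
            return sectors
    raise ValueError(f"track {track} is outside the 35-track CBM DOS format")

total_sectors = sum(sectors_on_track(t) for t in range(1, 36))
free_blocks = total_sectors - sectors_on_track(RESERVED_TRACK)

print(total_sectors)                 # 683 sectors (blocks) in total
print(total_sectors * SECTOR_SIZE)   # 174,848 bytes, i.e. the "170 KB" figure
print(free_blocks)                   # 664 blocks free after formatting
print(free_blocks * DATA_PER_BLOCK)  # 168,656 bytes available for user data
```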
6769
18779361
https://en.wikipedia.org/wiki?curid=6769
Commodore 1581
The Commodore 1581 is a 3½-inch double-sided double-density floppy disk drive that was released by Commodore Business Machines (CBM) in 1987, primarily for its C64 and C128 home/personal computers. The drive stores 800 kilobytes using MFM encoding, in a format different from the MS-DOS (720 KB), Amiga (880 KB), and Mac Plus (800 KB) formats. With special software it is possible to read C1581 disks on an x86 PC system, and likewise to read MS-DOS and other disk formats in the C1581 (using Big Blue Reader), provided that the drive in question can handle the physical disk format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users. Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot utilize software that performs low-level disk access (as the vast majority of Commodore 64 games do). The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to work with in terms of block availability maps, which were still very much in vogue, and which for some time had been the traditional way of inquiring into block availability. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted (a block being equal to 256 bytes). The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 KB, the 1581 is the highest-capacity serial-bus drive that was ever made by Commodore (the 1-MB SFD-1001 uses the parallel IEEE-488), and the only 3½-inch one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high density (1.6 MB) and FD-4000 extra-high density (3.2 MB) 3½-inch drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes. As on the 1541 and 1571, a nearly identical job queue is available to the user in zero page (except for job 0), providing for exceptional degrees of compatibility. Unlike the 1541 and 1571, the 1581 uses a low-level disk format similar enough to the MS-DOS format to be handled by standard floppy controller hardware, as the 1581 is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks and ten 512-byte sectors per track, per side, used as 40 logical sectors of 256 bytes each. Special software is required to read 1581 disks on a PC due to the different file system. An internal floppy drive and controller are required as well; USB floppy drives operate strictly at the file system level and do not allow low-level disk access. The WD1770 controller chip, however, was the source of some early problems with 1581 drives when the first production runs were recalled due to a high failure rate; the problem was quickly corrected. Later versions of the 1581 drive have a smaller, more streamlined-looking external power supply provided with them. Specifications. 1581 Image Layout. 
The 1581 disk has 80 logical tracks, each with 40 logical sectors (the actual physical layout of the diskette is abstracted and managed by a hardware translation layer). The directory starts on 40/3 (track 40, sector 3). The disk header is on 40/0, and the BAM (block availability map) resides on 40/1 and 40/2.
Header sector (40/0):
$00–01: track/sector reference to the first directory sector (40/3)
$02: DOS version ('D')
$04–13: disk label, $A0 padded
$16–17: disk ID
$19–1A: DOS type ('3D')
First BAM sector (40/1):
$00–01: track/sector reference to the next BAM sector (40/2)
$02: DOS version ('D')
$04–05: disk ID
$06: I/O byte
$07: autoboot flag
$10–FF: BAM entries for tracks 1–40
Second BAM sector (40/2):
$00–01: 00/FF
$02: DOS version ('D')
$04–05: disk ID
$06: I/O byte
$07: autoboot flag
$10–FF: BAM entries for tracks 41–80
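As a quick sanity check of the capacity figures quoted above (800 KB, 3160 blocks free, 80 tracks of 40 logical 256-byte sectors, with track 40 holding the header, BAM and directory), here is a minimal Python sketch; it is purely illustrative and not part of any Commodore documentation.

```python
# Cross-check of the 1581 capacity figures quoted above.
TRACKS = 80             # logical tracks
SECTORS_PER_TRACK = 40  # logical 256-byte sectors per track
BLOCK_SIZE = 256        # bytes per logical sector (block)
RESERVED_TRACKS = 1     # track 40 holds the header, BAM and directory

total_blocks = TRACKS * SECTORS_PER_TRACK
free_blocks = total_blocks - RESERVED_TRACKS * SECTORS_PER_TRACK

# The physical layout gives the same raw capacity:
# 80 tracks x 2 sides x 10 sectors x 512 bytes.
assert 80 * 2 * 10 * 512 == total_blocks * BLOCK_SIZE

print(total_blocks * BLOCK_SIZE)  # 819,200 bytes, i.e. the 800 KB figure
print(free_blocks)                # 3160 blocks free after formatting
```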
6771
19372301
https://en.wikipedia.org/wiki?curid=6771
College football
College football is gridiron football that is played by teams of amateur student-athletes at universities and colleges. It was through collegiate competition that gridiron football first gained popularity in the United States. Like gridiron football generally, college football is most popular in the United States and Canada. While no single governing body exists for college football in the United States, most schools, especially those at the highest levels of play, are members of the NCAA. In Canada, collegiate football competition is governed by U Sports for universities. The Canadian Collegiate Athletic Association (for colleges) governs soccer and other sports but not gridiron football. Other countries, such as Mexico, Japan and South Korea, also host college football leagues with modest levels of support. Unlike most other major sports in North America, no official minor league farm organizations exist for American football or Canadian football. Therefore, college football is generally considered to be the second tier of American and Canadian football; ahead of high school competition, but below professional competition. In some parts of the United States, especially the South and Midwest, college football is more popular than professional football. For much of the 20th century, college football was generally considered to be more prestigious than professional football. The overwhelming majority of professional football players in the NFL and other leagues previously played college football. The NFL draft each spring sees 224 players selected and offered a contract to play in the league, with the vast majority coming from the NCAA. Other professional leagues, such as the CFL and UFL, additionally hold their own drafts each year which also see primarily college players selected. Players who are not selected can still attempt to obtain a professional roster spot as an undrafted free agent. Despite these opportunities, only around 1.6% of NCAA college football players end up playing professionally in the NFL. History. Even after the emergence of the professional National Football League (NFL), college football has remained extremely popular throughout the U.S. Although the college game has a much larger margin for talent than its pro counterpart, the sheer number of fans following major colleges provides a financial equalizer for the game, with Division I programs – the highest level – playing in huge stadiums, six of which have seating capacity exceeding 100,000 people. In many cases, college stadiums employ bench-style seating, as opposed to individual seats with backs and arm rests (although many stadiums do have a small number of chair back seats in addition to the bench seating). This allows them to seat more fans in a given amount of space than the typical professional stadium, which tends to have more features and comforts for fans. Only three stadiums owned by U.S. colleges or universities, L&N Stadium at the University of Louisville, Center Parc Stadium at Georgia State University, and FAU Stadium at Florida Atlantic University, consist entirely of chair back seating. College athletes, unlike players in the NFL, are not permitted by the NCAA to be paid salaries. Colleges are only allowed to provide non-monetary compensation such as athletic scholarships that provide for tuition, housing, and books. With new bylaws made by the NCAA, college athletes can now receive "name, image, and likeness" (NIL) deals, a way to get sponsorships and money before their pro debut. 
Rugby football in Great Britain and Canada. Modern North American football has its origins in various games, all known as "football", played at public schools in Great Britain in the mid-19th century. By the 1840s, students at Rugby School were playing a game in which players were able to pick up the ball and run with it, a sport later known as rugby football. The game was taken to Canada by British soldiers stationed there and was soon being played at Canadian colleges. The first documented gridiron football game was played at University College, a college of the University of Toronto, on November 9, 1861. One of the participants in the game involving University of Toronto students was William Mulock, later chancellor of the school. A football club was formed at the university soon afterward, although its rules of play then are unclear. In 1864, at Trinity College, also a college of the University of Toronto, F. Barlow Cumberland and Frederick A. Bethune devised rules based on rugby football. Modern Canadian football is widely regarded as having originated with a game played in Montreal, in 1865, when British Army officers played local civilians. The game gradually gained a following, and the Montreal Football Club was formed in 1868, the first recorded non-university football club in Canada. American college football. Early games appear to have had much in common with the traditional "mob football" played in Great Britain. The games remained largely unorganized until the 19th century, when intramural games of football began to be played on college campuses. Each school played its own variety of football. Princeton University students played a game called "ballown" as early as 1820. In 1827, a Harvard tradition known as "Bloody Monday" began, which consisted of a mass ballgame between the freshman and sophomore classes. In 1860, both the town police and the college authorities agreed the Bloody Monday had to go. Harvard students responded by going into mourning for a mock figure called "Football Fightum", for whom they conducted funeral rites. The authorities held firm, and it was another dozen years before football was once again played at Harvard. Dartmouth played its own version called "Old division football", the rules of which were first published in 1871, though the game dates to at least the 1830s. All of these games, and others, shared certain commonalities. They remained largely "mob" style games, with huge numbers of players attempting to advance the ball into a goal area, often by any means necessary. Rules were simple, and violence and injury were common. The violence of these mob-style games led to widespread protests and a decision to abandon them. Yale, under pressure from the city of New Haven, banned the play of all forms of football in 1860. American football historian Parke H. Davis described the period between 1869 and 1875 as the 'Pioneer Period'; the years 1876–93 he called the 'Period of the American Intercollegiate Football Association'; and the years 1894–1933 he dubbed the "Period of Rules Committees and Conferences". Princeton–Columbia–Yale–Rutgers. On November 6, 1869, Rutgers University faced Princeton University, then known as the College of New Jersey, in the first collegiate football game. The game more closely resembled soccer than football as it is played in the 21st century. It was played with a round ball, and used a set of rules suggested by Rutgers captain William J. 
Leggett, based on The Football Association's first set of rules, which were an early attempt by the former pupils of England's public schools, to unify the rules of their various public schools. The game was played at a Rutgers Field in New Brunswick, New Jersey. Two teams of 25 players attempted to score by kicking the ball into the opposing team's goal. Throwing or carrying the ball was not allowed, but there was plenty of physical contact between players. The first team to reach six goals was declared the winner. Rutgers won by a score of six to four. A rematch was played at Princeton a week later under Princeton's own set of rules (one notable difference was the awarding of a "free kick" to any player that caught the ball on the fly, which was a feature adopted from The Football Association's rules; the fair catch kick rule has survived through to modern American game). Princeton won that game by a score of 8 – 0. Columbia joined the series in 1870 and by 1872 several schools were fielding intercollegiate teams, including Yale and Stevens Institute of Technology. Columbia University was the third school to field a team. The Lions traveled from New York City to New Brunswick on November 12, 1870, and were defeated by Rutgers 6 to 3. The game suffered from disorganization and the players kicked and battled each other as much as the ball. Later in 1870, Princeton and Rutgers played again with Princeton defeating Rutgers 6–0. This game's violence caused such an outcry that no games at all were played in 1871. Football came back in 1872, when Columbia played Yale for the first time. The Yale team was coached and captained by David Schley Schaff, who had learned to play football while attending Rugby School. Schaff himself was injured and unable to play the game, but Yale won the game 3–0 nonetheless. Later in 1872, Stevens Tech became the fifth school to field a team. Stevens lost to Columbia, but beat both New York University and City College of New York during the following year. By 1873, the college students playing football had made significant efforts to standardize their fledgling game. Teams had been scaled down from 25 players to 20. The only way to score was still to bat or kick the ball through the opposing team's goal, and the game was played in two 45-minute halves on fields 140 yards long and 70 yards wide. On October 20, 1873, representatives from Yale, Columbia, Princeton, and Rutgers met at the Fifth Avenue Hotel in New York City to codify the first set of intercollegiate football rules. Before this meeting, each school had its own set of rules and games were usually played using the home team's own particular code. At this meeting, a list of rules, based more on the Football Association's rules than the rules of the recently founded Rugby Football Union, was drawn up for intercollegiate football games. Harvard–McGill (1874). Old "Football Fightum" had been resurrected at Harvard in 1872, when Harvard resumed playing football. Harvard, however, preferred to play a rougher version of football called "the Boston Game" in which the kicking of a round ball was the most prominent feature though a player could run with the ball, pass it, or dribble it (known as "babying"). The man with the ball could be tackled, although hitting, tripping, "hacking" and other unnecessary roughness was prohibited. There was no limit to the number of players, but there were typically ten to fifteen per side. A player could carry the ball only when being pursued. 
As a result of this, Harvard refused to attend the rules conference organized by Rutgers, Princeton and Columbia at the Fifth Avenue Hotel in New York City on October 20, 1873, to agree on a set of rules and regulations that would allow them to play a form of football that was essentially Association football, and continued to play under its own code. While Harvard's voluntary absence from the meeting made it hard for them to schedule games against other American universities, it agreed to a challenge to play the rugby team of McGill University, from Montreal, in a two-game series. It was agreed that two games would be played on Harvard's Jarvis baseball field in Cambridge, Massachusetts on May 14 and 15, 1874: one to be played under Harvard rules, another under the stricter rugby regulations of McGill. Jarvis Field was at the time a patch of land at the northern point of the Harvard campus, bordered by Everett and Jarvis Streets to the north and south, and Oxford Street and Massachusetts Avenue to the east and west. Harvard beat McGill in the "Boston Game" on the Thursday and held McGill to a 0–0 tie on the Friday. The Harvard students took to the rugby rules and adopted them as their own. The games featured a round ball instead of a rugby-style oblong ball. This series of games represents an important milestone in the development of the modern game of American football. In October 1874, the Harvard team once again traveled to Montreal to play McGill in rugby, where they won by three tries. Inasmuch as Rugby football had been transplanted to Canada from England, the McGill team played under a set of rules which allowed a player to pick up the ball and run with it whenever he wished. Another rule, unique to McGill, was to count tries (the act of grounding the football past the opposing team's goal line; there was no end zone during this time), as well as goals, in the scoring. In the Rugby rules of the time, a try only provided the attempt to kick a free goal from the field. If the kick was missed, the try did not score any points itself. Harvard–Tufts, Harvard–Yale (1875). Harvard quickly took a liking to the rugby game, and its use of the try which, until that time, was not used in American football. The try would later evolve into the score known as the touchdown. On June 4, 1875, Harvard faced Tufts University in the first game between two American colleges played under rules similar to the McGill/Harvard contest, which was won by Tufts 1–0. Under the rules, each side fielded 11 men at any given time, the ball was advanced by kicking or carrying it, and tackles of the ball carrier stopped play – features that have carried over to the modern version of football played today. Harvard later challenged its closest rival, Yale, and the Bulldogs accepted. The two teams agreed to play under a set of rules called the "Concessionary Rules", which involved Harvard conceding something to Yale's soccer and Yale conceding a great deal to Harvard's rugby. They decided to play with 15 players on each team. On November 13, 1875, Yale and Harvard played each other for the first time ever, where Harvard won 4–0. At the first The Game (as the annual contest between Harvard and Yale came to be named), the future "father of American football" Walter Camp was among the 2000 spectators in attendance. Walter, a native of New Britain, Connecticut, would enroll at Yale the next year. 
He was torn between an admiration for Harvard's style of play and the misery of the Yale defeat, and became determined to avenge Yale's defeat. Spectators from Princeton also carried the game back home, where it quickly became the most popular version of football. On November 23, 1876, representatives from Harvard, Yale, Princeton, and Columbia met at the Massasoit House hotel in Springfield, Massachusetts to standardize a new code of rules based on the rugby game first introduced to Harvard by McGill University in 1874. Three of the schools—Harvard, Columbia, and Princeton—formed the Intercollegiate Football Association, as a result of the meeting. Yale initially refused to join this association because of a disagreement over the number of players to be allowed per team (relenting in 1879) and Rutgers were not invited to the meeting. The rules that they agreed upon were essentially those of rugby union at the time with the exception that points be awarded for scoring a try, not just the conversion afterwards (extra point). Incidentally, rugby was to make a similar change to its scoring system 10 years later. Walter Camp: Father of American football. Walter Camp is widely considered to be the most important figure in the development of American football. As a youth, he excelled in sports like track, baseball, and association football, and after enrolling at Yale in 1876, he earned varsity honors in every sport the school offered. Following the introduction of rugby-style rules to American football, Camp became a fixture at the Massasoit House conventions where rules were debated and changed. Dissatisfied with what seemed to him to be a disorganized mob, he proposed his first rule change at the first meeting he attended in 1878: a reduction from fifteen players to eleven. The motion was rejected at that time but passed in 1880. The effect was to open up the game and emphasize speed over strength. Camp's most famous change, the establishment of the line of scrimmage and the snap from center to quarterback, was also passed in 1880. Originally, the snap was executed with the foot of the center. Later changes made it possible to snap the ball with the hands, either through the air or by a direct hand-to-hand pass. Rugby league followed Camp's example, and in 1906 introduced the play-the-ball rule, which greatly resembled Camp's early scrimmage and center-snap rules. In 1966, rugby league introduced a four-tackle rule (changed in 1972 to a six-tackle rule) based on Camp's early down-and-distance rules. Camp's new scrimmage rules revolutionized the game, though not always as intended. Princeton, in particular, used scrimmage play to slow the game, making incremental progress towards the end zone during each down. Rather than increase scoring, which had been Camp's original intent, the rule was exploited to maintain control of the ball for the entire game, resulting in slow, unexciting contests. At the 1882 rules meeting, Camp proposed that a team be required to advance the ball a minimum of five yards within three downs. These down-and-distance rules, combined with the establishment of the line of scrimmage, transformed the game from a variation of rugby football into the distinct sport of American football. Camp was central to several more significant rule changes that came to define American football. In 1881, the field was reduced in size to its modern dimensions of 120 by 53 yards (109.7 by 48.8 meters). 
Several times in 1883, Camp tinkered with the scoring rules, finally arriving at four points for a touchdown, two points for kicks after touchdowns, two points for safeties, and five for field goals. Camp's innovations in the area of point scoring influenced rugby union's move to point scoring in 1890. In 1887, game time was set at two halves of 45 minutes each. Also in 1887, two paid officials—a referee and an umpire—were mandated for each game. A year later, the rules were changed to allow tackling below the waist, and in 1889, the officials were given whistles and stopwatches. After leaving Yale in 1882, Camp was employed by the New Haven Clock Company until his death in 1925. Though no longer a player, he remained a fixture at annual rules meetings for most of his life, and he personally selected an annual All-American team every year from 1889 through 1924. The Walter Camp Football Foundation continues to select All-American teams in his honor. Expansion. College football expanded greatly during the last two decades of the 19th century. Several major rivalries date from this time period. November 1890 was an active time in the sport. In Baldwin City, Kansas, on November 22, 1890, college football was first played in the state of Kansas. Baker beat Kansas 22–9. On the 27th, Vanderbilt played Nashville (Peabody) at Athletic Park and won 40–0. It was the first time organized football was played in the state of Tennessee. The 29th also saw the first instance of the Army–Navy Game. Navy won 24–0. East. Rutgers was first to extend the reach of the game. An intercollegiate game was first played in the state of New York when Rutgers played Columbia on November 2, 1872. It was also the first scoreless tie in the history of the fledgling sport. Yale football started the same year and had its first match against Columbia, the nearest college to play football. It took place at Hamilton Park in New Haven and was the first game in New England. The game was essentially soccer with 20-man sides, played on a field 400 by 250 feet. Yale won 3–0, with Tommy Sherman scoring the first goal and Lew Irwin the other two. After the first game against Harvard, Tufts took its squad to Bates College in Lewiston, Maine for the first football game played in Maine. This occurred on November 6, 1875. Penn's Athletic Association was looking to pick "a twenty" to play a game of football against Columbia. This "twenty" never played Columbia, but did play twice against Princeton. Princeton won both games 6 to 0. The first of these happened on November 11, 1876, in Philadelphia and was the first intercollegiate game in the state of Pennsylvania. Brown entered the intercollegiate game in 1878. The first game where one team scored over 100 points happened on October 25, 1884, when Yale routed Dartmouth 113–0. It was also the first time one team scored over 100 points and the opposing team was shut out. The next week, Princeton outscored Lafayette 140 to 0. The first intercollegiate game in the state of Vermont happened on November 6, 1886, between Dartmouth and Vermont at Burlington, Vermont. Dartmouth won 91 to 0. Penn State played its first season in 1887, but had no head coach for its first five years, from 1887 to 1891. The team played its home games on the Old Main lawn on campus in State College, Pennsylvania, and compiled a 12–8–1 record in these seasons, playing as an independent from 1887 to 1890. In 1891, the Pennsylvania Intercollegiate Football Association (PIFA) was formed. 
It consisted of Bucknell University, Dickinson College, Franklin & Marshall College, Haverford College, Penn State, and Swarthmore College. Lafayette College, and Lehigh University were excluded because it was felt they would dominate the Association. Penn State won the championship with a 4–1–0 record. Bucknell's record was 3–1–1 (losing to Franklin & Marshall and tying Dickinson). The Association was dissolved prior to the 1892 season. The first nighttime football game was played in Mansfield, Pennsylvania on September 28, 1892, between Mansfield State Normal and Wyoming Seminary and ended at halftime in a 0–0 tie. The Army–Navy game of 1893 saw the first documented use of a football helmet by a player in a game. Joseph M. Reeves had a crude leather helmet made by a shoemaker in Annapolis and wore it in the game after being warned by his doctor that he risked death if he continued to play football after suffering an earlier kick to the head. Middle West. In 1879, the University of Michigan became the first school west of Pennsylvania to establish a college football team. On May 30, 1879, Michigan beat Racine College 1–0 in a game played in Chicago. The "Chicago Daily Tribune" called it "the first rugby-football game to be played west of the Alleghenies." Other Midwestern schools soon followed suit, including the University of Chicago, Northwestern University, and the University of Minnesota. The first western team to travel east was the 1881 Michigan team, which played at Harvard, Yale and Princeton. The nation's first college football league, the Intercollegiate Conference of Faculty Representatives (also known as the Western Conference), a precursor to the Big Ten Conference, was founded in 1895. Led by coach Fielding H. Yost, Michigan became the first "western" national power. From 1901 to 1905, Michigan had a 56-game undefeated streak that included a 1902 trip to play in the first college football bowl game, which later became the Rose Bowl Game. During this streak, Michigan scored 2,831 points while allowing only 40. Organized intercollegiate football was first played in the state of Minnesota on September 30, 1882, when Hamline was convinced to play Minnesota. Minnesota won 2 to 0. It was the first game west of the Mississippi River. November 30, 1905, saw Chicago defeat Michigan 2 to 0. Dubbed "The First Greatest Game of the Century", it broke Michigan's 56-game unbeaten streak and marked the end of the "Point-a-Minute" years. South. Organized collegiate football was first played in the state of Virginia and the south on November 2, 1873, in Lexington between Washington and Lee and VMI. Washington and Lee won 4–2. Some industrious students of the two schools organized a game for October 23, 1869, but it was rained out. Students of the University of Virginia were playing pickup games of the kicking-style of football as early as 1870, and some accounts even claim it organized a game against Washington and Lee College in 1871; but no record has been found of the score of this contest. Due to scantiness of records of the prior matches some will claim Virginia v. Pantops Academy November 13, 1887, as the first game in Virginia. On April 9, 1880, at Stoll Field, Transylvania University (then called Kentucky University) beat Centre College by the score of –0 in what is often considered the first recorded game played in the South. 
The first game of "scientific football" in the South was the first instance of the Victory Bell rivalry between North Carolina and Duke (then known as Trinity College), held on Thanksgiving Day, 1888, at the North Carolina State Fairgrounds in Raleigh, North Carolina. On November 13, 1887, the Virginia Cavaliers and Pantops Academy fought to a scoreless tie in the first organized football game in the state of Virginia. Students at UVA were playing pickup games of the kicking style of football as early as 1870, and some accounts even claim that some industrious ones organized a game against Washington and Lee College in 1871, just two years after Rutgers and Princeton's historic first game in 1869, but no record has been found of the score of this contest. Washington and Lee also claims a 4 to 2 win over VMI in 1873. On October 18, 1888, the Wake Forest Demon Deacons defeated the North Carolina Tar Heels 6 to 4 in the first intercollegiate game in the state of North Carolina. On December 14, 1889, Wofford defeated Furman 5 to 1 in the first intercollegiate game in the state of South Carolina. The game featured no uniforms, no positions, and the rules were formulated before the game. January 30, 1892, saw the first football game played in the Deep South when the Georgia Bulldogs defeated Mercer 50–0 at Herty Field. The beginnings of the contemporary Southeastern Conference and Atlantic Coast Conference date to 1894. The Southern Intercollegiate Athletic Association (SIAA) was founded on December 21, 1894, by William Dudley, a chemistry professor at Vanderbilt. The original members were Alabama, Auburn, Georgia, Georgia Tech, North Carolina, Sewanee, and Vanderbilt. Clemson, Cumberland, Kentucky, LSU, Mercer, Mississippi, Mississippi A&M (Mississippi State), Southwestern Presbyterian University, Tennessee, Texas, Tulane, and the University of Nashville joined in 1895 as invited charter members. The conference was originally formed for "the development and purification of college athletics throughout the South". The first forward pass in football likely occurred on October 26, 1895, in a game between Georgia and North Carolina when, out of desperation, North Carolina back Joel Whitaker threw the ball instead of punting it and George Stephens caught it. On November 9, 1895, John Heisman executed a hidden ball trick using quarterback Reynolds Tichenor to get Auburn's only touchdown in a 9 to 6 loss to Vanderbilt. It was the first game in the South decided by a field goal. Heisman later used the trick against Pop Warner's Georgia team. Warner picked up the trick and later used it at Cornell against Penn State in 1897. He then used it in 1903 at Carlisle against Harvard and garnered national attention. The 1899 Sewanee Tigers are one of the all-time great teams of the early sport. The team went 12–0, outscoring opponents 322 to 10. Known as the "Iron Men", the squad of just 13 players made a six-day road trip with five shutout wins over Texas A&M, Texas, Tulane, LSU, and Ole Miss. It is recalled memorably with the phrase "... and on the seventh day they rested." Grantland Rice called them "the most durable football team I ever saw." Organized intercollegiate football was first played in the state of Florida in 1901. A 7-game series between intramural teams from Stetson and Forbes occurred in 1894. The first intercollegiate game between official varsity teams was played on November 22, 1901. 
Stetson beat Florida Agricultural College at Lake City, one of the four forerunners of the University of Florida, 6–0, in a game played as part of the Jacksonville Fair. On September 27, 1902, Georgetown beat Navy 4 to 0. Georgetown authorities claim it as the game with the first-ever "roving center", or linebacker, when Percy Given stood up, in contrast to the usual tale crediting Germany Schulz. The first linebacker in the South is often considered to be Frank Juhan. On Thanksgiving Day 1903, a game was scheduled in Montgomery, Alabama, between the best teams from each region of the Southern Intercollegiate Athletic Association for an "SIAA championship game", pitting Cumberland against Heisman's Clemson. The game ended in an 11–11 tie, causing many teams to claim the title. Heisman pressed hardest for Cumberland to get the claim of champion. It was his last game as Clemson head coach. The year 1904 saw big coaching hires in the South: Mike Donahue at Auburn, John Heisman at Georgia Tech, and Dan McGugin at Vanderbilt. Both Donahue and McGugin had just come from the North, Donahue from Yale and McGugin from Michigan, and both were among the initial inductees of the College Football Hall of Fame. The undefeated 1904 Vanderbilt team scored an average of 52.7 points per game, the most in college football that season, and allowed just four points. Southwest. The first college football game in Oklahoma Territory occurred on November 7, 1895, when the "Oklahoma City Terrors" defeated the Oklahoma Sooners 34 to 0. The Terrors were a mix of Methodist college and high school students. The Sooners did not manage a single first down. By the next season, Oklahoma coach John A. Harts had left to prospect for gold in the Arctic. Organized football was first played in the territory on November 29, 1894, between the Oklahoma City Terrors and Oklahoma City High School. The high school won 24 to 0. Pacific Coast. The University of Southern California first fielded an American football team in 1888. It played its first game on November 14 of that year against the Alliance Athletic Club, gaining a 16–0 victory. Frank Suffel and Henry H. Goddard were playing coaches for the first team, which was put together by quarterback Arthur Carroll, who in turn volunteered to make the pants for the team and later became a tailor. USC faced its first collegiate opponent the following year in fall 1889, defeating St. Vincent's College 40–0. In 1893, USC joined the Intercollegiate Football Association of Southern California (the forerunner of the SCIAC), which was composed of USC, Occidental College, Throop Polytechnic Institute (Caltech), and Chaffey College. Pomona College was invited to enter, but declined to do so. An invitation was also extended to Los Angeles High School. In 1891, the first Stanford football team was hastily organized and played a four-game season beginning in January 1892 with no official head coach. Following the season, Stanford captain John Whittemore wrote to Yale coach Walter Camp asking him to recommend a coach for Stanford. To Whittemore's surprise, Camp agreed to coach the team himself, on the condition that he finish the season at Yale first. As a result of Camp's late arrival, Stanford played just three official games, against San Francisco's Olympic Club and rival California. The team also played exhibition games against two Los Angeles area teams that Stanford does not include in official results. 
Camp returned to the East Coast following the season, then came back to coach Stanford in 1894 and 1895. On December 25, 1894, Amos Alonzo Stagg's Chicago Maroons agreed to play Camp's Stanford football team in San Francisco in the first postseason intersectional contest, foreshadowing the modern bowl game. Future president Herbert Hoover was Stanford's student financial manager. Chicago won 24 to 4. Stanford won a rematch in Los Angeles on December 29 by 12 to 0. The Big Game between Stanford and California is the oldest college football rivalry in the West. The first game was played on San Francisco's Haight Street Grounds on March 19, 1892, with Stanford winning 14–10. The term "Big Game" was first used in 1900, when it was played on Thanksgiving Day in San Francisco. During that game, a large group of men and boys, who were observing from the roof of the nearby S.F. and Pacific Glass Works, fell into the fiery interior of the building when the roof collapsed, resulting in 13 dead and 78 injured. On December 4, 1900, the last victim of the disaster (Fred Lilly) died, bringing the death toll to 22; and, to this day, the "Thanksgiving Day Disaster" remains the deadliest accident to kill spectators at a U.S. sporting event. The University of Oregon began playing American football in 1894 and played its first game on March 24, 1894, defeating Albany College 44–3 under head coach Cal Young. Cal Young left after that first game and J.A. Church took over the coaching position in the fall for the rest of the season. Oregon finished the season with two additional losses and a tie, but went undefeated the following season, winning all four of its games under head coach Percy Benson. In 1899, the Oregon football team left the state for the first time, playing the California Golden Bears in Berkeley, California. American football at Oregon State University started in 1893, shortly after athletics were initially authorized at the college. Athletics were banned at the school in May 1892, but when the strict school president, Benjamin Arnold, died, President John Bloss reversed the ban. Bloss's son William started the first team, on which he served as both coach and quarterback. The team's first game was an easy 63–0 defeat of the home team, Albany College. In May 1900, Yost was hired as the football coach at Stanford University, and, after traveling home to West Virginia, he arrived in Palo Alto, California, on August 21, 1900. Yost led the 1900 Stanford team to a 7–2–1 record, outscoring opponents 154 to 20. The following year, 1901, Yost was hired by Charles A. Baird as the head football coach for the Michigan Wolverines football team. On January 1, 1902, Yost's dominant 1901 Michigan Wolverines team played a 3–1–2 Stanford squad in the inaugural "Tournament East-West football game", now known as the Rose Bowl Game, and won by a score of 49–0 after Stanford captain Ralph Fisher requested to quit with eight minutes remaining. The 1905 season marked the first meeting between Stanford and USC. Consequently, Stanford is USC's oldest existing rival. The Big Game between Stanford and Cal on November 11, 1905, was the first played at Stanford Field, with Stanford winning 12–5. In 1906, citing concerns about the violence in American football, universities on the West Coast, led by California and Stanford, replaced the sport with rugby union. 
At the time, the future of American football was very much in doubt and these schools believed that rugby union would eventually be adopted nationwide. Other schools that followed suit and also made the switch included Nevada, St. Mary's, Santa Clara, and USC (in 1911). However, due to the perception that West Coast football was inferior to the game played on the East Coast, East Coast and Midwest teams shrugged off the loss of the teams and continued playing American football. With no nationwide movement, the available pool of rugby teams to play remained small. The schools scheduled games against local club teams and reached out to rugby union powers in Australia, New Zealand, and especially, due to its proximity, Canada. The annual Big Game between Stanford and California continued as rugby, with the winner invited by the British Columbia Rugby Union to a tournament in Vancouver over the Christmas holidays, with the winner of that tournament receiving the Cooper Keith Trophy. During 12 seasons of playing rugby union, Stanford was remarkably successful: the team had three undefeated seasons, three one-loss seasons, and an overall record of 94 wins, 20 losses, and 3 ties for a winning percentage of .816 (counting ties as half-wins, 95.5 of a possible 117). However, after a few years, the school began to feel the isolation of its newly adopted sport, which was not spreading as many had hoped. Students and alumni began to clamor for a return to American football to allow wider intercollegiate competition. The pressure at rival California was stronger (especially as the school had not been as successful in the Big Game as it had hoped), and in 1915 California returned to American football. As reasons for the change, the school cited the rule changes that had been made to American football, the overwhelming desire of students and supporters to play American football, interest in playing other East Coast and Midwest schools, and a patriotic desire to play an "American" game. California's return to American football increased the pressure on Stanford to also change back in order to maintain the rivalry. Stanford played its 1915, 1916, and 1917 "Big Games" as rugby union against Santa Clara, while California's football "Big Game" in those years was against Washington, but both schools desired to restore the old traditions. The onset of American involvement in World War I gave Stanford an out: in 1918, the Stanford campus was designated as the Students' Army Training Corps headquarters for all of California, Nevada, and Utah, and the commanding officer, Sam M. Parker, decreed that American football was the appropriate athletic activity to train soldiers, so rugby union was dropped. Mountain West. The University of Colorado began playing American football in 1890. Colorado found much success in its early years, winning eight Colorado Football Association Championships (1894–97, 1901–08). A recollection of the birth of Colorado football, written by one of CU's original gridders, John C. Nixon, also the school's second captain, was published in the "Silver & Gold" newspaper of December 16, 1898. In 1909, the Rocky Mountain Athletic Conference was founded, featuring four members: Colorado, Colorado College, Colorado School of Mines, and Colorado Agricultural College. The University of Denver and the University of Utah joined the RMAC in 1910. 
For its first thirty years, the RMAC was considered a major conference equivalent to today's Division I, before 7 larger members left and formed the Mountain States Conference (also called the Skyline Conference). Violence, formation of NCAA. College football increased in popularity through the remainder of the 19th and early 20th century. It also became increasingly violent. Between 1890 and 1905, 330 college athletes died as a direct result of injuries sustained on the football field. These deaths could be attributed to the mass formations and gang tackling that characterized the sport in its early years. The 1894 Harvard–Yale game, known as the "Hampden Park Blood Bath", resulted in crippling injuries for four players; the contest was suspended until 1897. The annual Army–Navy game was suspended from 1894 to 1898 for similar reasons. One of the major problems was the popularity of mass-formations like the flying wedge, in which a large number of offensive players charged as a unit against a similarly arranged defense. The resultant collisions often led to serious injuries and sometimes even death. Georgia fullback Richard Von Albade Gammon notably died on the field from concussions received against Virginia in 1897, causing Georgia, Georgia Tech, and Mercer to suspend their football programs. The situation came to a head in 1905 when there were 19 fatalities nationwide. President Theodore Roosevelt reportedly threatened to shut down the game if drastic changes were not made. However, the threat by Roosevelt to eliminate football is disputed by sports historians. What is absolutely certain is that on October 9, 1905, Roosevelt held a meeting of football representatives from Harvard, Yale, and Princeton. Though he lectured on eliminating and reducing injuries, he never threatened to ban football. He also lacked the authority to abolish football and was, in fact, actually a fan of the sport and wanted to preserve it. The President's sons were also playing football at the college and secondary levels at the time. Meanwhile, John H. Outland held an experimental game in Wichita, Kansas that reduced the number of scrimmage plays to earn a first down from four to three in an attempt to reduce injuries. The "Los Angeles Times" reported an increase in punts and considered the game much safer than regular play but that the new rule was not "conducive to the sport". In 1906, President Roosevelt organized a meeting among thirteen school leaders at the White House to find solutions to make the sport safer for the athletes. Because the college officials could not agree upon a change in rules, it was decided over the course of several subsequent meetings that an external governing body should be responsible. Finally, on December 28, 1905, 62 schools met in New York City to discuss rule changes to make the game safer. As a result of this meeting, the Intercollegiate Athletic Association of the United States was formed in 1906. The IAAUS was the original rule-making body of college football, but would go on to sponsor championships in other sports. The IAAUS would get its current name of National Collegiate Athletic Association (NCAA) in 1910, and still sets rules governing the sport. The rules committee considered widening the playing field to "open up" the game, but Harvard Stadium (the first large permanent football stadium) had recently been built at great expense; it would be rendered useless by a wider field. The rules committee legalized the forward pass instead. 
Though it was underused for years, this proved to be one of the most important rule changes in the establishment of the modern game. Another rule change banned "mass momentum" plays (many of which, like the infamous "flying wedge", were sometimes literally deadly). Modernization and innovation (1906–1930). As a result of the 1905–1906 reforms, mass formation plays became illegal and forward passes legal. Bradbury Robinson, playing for visionary coach Eddie Cochems at Saint Louis University, threw the first legal pass in a September 5, 1906, game against Carroll College at Waukesha. Other important changes, formally adopted in 1910, were the requirements that at least seven offensive players be on the line of scrimmage at the time of the snap, that there be no pushing or pulling, and that interlocking interference (arms linked or hands on belts and uniforms) was not allowed. These changes greatly reduced the potential for collision injuries. Several coaches emerged who took advantage of these sweeping changes. Amos Alonzo Stagg introduced such innovations as the huddle, the tackling dummy, and the pre-snap shift. Other coaches, such as Pop Warner and Knute Rockne, introduced new strategies that still remain part of the game. Besides these coaching innovations, several rules changes during the first third of the 20th century had a profound impact on the game, mostly in opening up the passing game. In 1914, the first roughing-the-passer penalty was implemented. In 1918, the rules on eligible receivers were loosened to allow eligible players to catch the ball anywhere on the field—previously strict rules were in place allowing passes to only certain areas of the field. Scoring rules also changed during this time: field goals were lowered to three points in 1909 and touchdowns raised to six points in 1912. Star players that emerged in the early 20th century include Jim Thorpe, Red Grange, and Bronko Nagurski; these three made the transition to the fledgling NFL and helped turn it into a successful league. Sportswriter Grantland Rice helped popularize the sport with his poetic descriptions of games and colorful nicknames for the game's biggest players, including Notre Dame's "Four Horsemen" backfield and Fordham University's linemen, known as the "Seven Blocks of Granite". In 1907 at Champaign, Illinois Chicago and Illinois played in the first game to have a halftime show featuring a marching band. Chicago won 42–6. On November 25, 1911 Kansas played at Missouri in the first homecoming football game. The game was "broadcast" play-by-play over telegraph to at least 1,000 fans in Lawrence, Kansas. It ended in a 3–3 tie. The game between West Virginia and Pittsburgh on October 8, 1921, saw the first live radio broadcast of a college football game when Harold W. Arlin announced that year's Backyard Brawl played at Forbes Field on KDKA. Pitt won 21–13. On October 28, 1922, Princeton and Chicago played the first game to be nationally broadcast on radio. Princeton won 21–18 in a hotly contested game which had Princeton dubbed the "Team of Destiny". Rise of the South. One publication claims "The first scouting done in the South was in 1905, when Dan McGugin and Captain Innis Brown, of Vanderbilt went to Atlanta to see Sewanee play Georgia Tech." Fuzzy Woodruff claims Davidson was the first in the south to throw a legal forward pass in 1906. The following season saw Vanderbilt execute a double pass play to set up the touchdown that beat Sewanee in a meeting of the unbeaten for the SIAA championship. 
Grantland Rice cited this event as the greatest thrill he ever witnessed in his years of watching sports. Vanderbilt coach Dan McGugin in "Spalding's Football Guide" summation of the season in the SIAA wrote "The standing. First, Vanderbilt; second, Sewanee, a might good second;" and that Aubrey Lanier "came near winning the Vanderbilt game by his brilliant dashes after receiving punts." Bob Blake threw the final pass to center Stein Stone, catching it near the goal among defenders. Honus Craig then ran in the winning touchdown. Heisman shift. Using the "jump shift" offense, John Heisman's Georgia Tech Golden Tornado won 222 to 0 over Cumberland on October 7, 1916, at Grant Field in the most lopsided victory in college football history. Tech went on a 33-game winning streak during this period. The 1917 team was the first national champion from the South, led by a powerful backfield. It also had the first two players from the Deep South selected first-team All-American in Walker Carpenter and Everett Strupper. Pop Warner's Pittsburgh Panthers were also undefeated, but declined a challenge by Heisman to a game. When Heisman left Tech after 1919, his shift was still employed by protégé William Alexander. Notable intersectional games. In 1906, Vanderbilt defeated Carlisle 4 to 0, the result of a Bob Blake field goal. In 1907 Vanderbilt fought Navy to a 6 to 6 tie. In 1910 Vanderbilt held defending national champion Yale to a scoreless tie. Helping Georgia Tech's claim to a title in 1917, the Auburn Tigers held undefeated, Chic Harley-led Big Ten champion Ohio State to a scoreless tie the week before Georgia Tech beat the Tigers 68 to 7. The next season, with many players gone due to World War I, a game was finally scheduled at Forbes Field with Pittsburgh. The Panthers, led by freshman Tom Davies, defeated Georgia Tech 32 to 0. Tech center Bum Day was the first player on a Southern team ever selected first-team All-American by Walter Camp. 1917 saw the rise of another Southern team in Centre of Danville, Kentucky. In 1921 Bo McMillin-led Centre upset defending national champion Harvard 6 to 0 in what is widely considered one of the greatest upsets in college football history. The next year Vanderbilt fought Michigan to a scoreless tie at the inaugural game at Dudley Field (now Vanderbilt Stadium), the first stadium in the South made exclusively for college football. Michigan coach Fielding Yost and Vanderbilt coach Dan McGugin were brothers-in-law, and the latter the protégé of the former. The game featured the season's two best defenses and included a goal line stand by Vanderbilt to preserve the tie. Its result was "a great surprise to the sporting world". Commodore fans celebrated by throwing some 3,000 seat cushions onto the field. The game features prominently in Vanderbilt's history. That same year, Alabama upset Penn 9 to 7. Vanderbilt's line coach then was Wallace Wade, who coached Alabama to the South's first Rose Bowl victory in 1925. This game is commonly referred to as "the game that changed the south". Wade followed up the next season with an undefeated record and Rose Bowl tie. Georgia's 1927 "dream and wonder team" defeated Yale for the first time. Georgia Tech, led by Heisman protégé William Alexander, gave the dream and wonder team its only loss, and the next year were national and Rose Bowl champions. The Rose Bowl included Roy Riegels' wrong-way run. On October 12, 1929, Yale lost to Georgia in Sanford Stadium in its first trip to the south. 
Wade's Alabama again won a national championship and Rose Bowl in 1930. Coaches of the era. Glenn "Pop" Warner. Glenn "Pop" Warner coached at several schools throughout his career, including the University of Georgia, Cornell University, University of Pittsburgh, Stanford University, Iowa State University, and Temple University. One of his most famous stints was at the Carlisle Indian Industrial School, where he coached Jim Thorpe, who went on to become the first president of the National Football League, an Olympic Gold Medalist, and is widely considered one of the best overall athletes in history. Warner wrote one of the first important books of football strategy, "Football for Coaches and Players", published in 1927. Though the shift was invented by Stagg, Warner's single wing and double wing formations greatly improved upon it; for almost 40 years, these were among the most important formations in football. As part of his single and double wing formations, Warner was one of the first coaches to effectively use the forward pass. Among his other innovations are modern blocking schemes, the three-point stance, and the reverse play. The youth football league, Pop Warner Little Scholars, was named in his honor. Knute Rockne. Knute Rockne rose to prominence in 1913 as an end for the University of Notre Dame, then a largely unknown Midwestern Catholic school. When Army scheduled Notre Dame as a warm-up game, they thought little of the small school. Rockne and quarterback Gus Dorais made innovative use of the forward pass, still at that point a relatively unused weapon, to defeat Army 35–13 and helped establish the school as a national power. Rockne returned to coach the team in 1918, and devised the powerful Notre Dame Box offense, based on Warner's single wing. He is credited with being the first major coach to emphasize offense over defense. Rockne is also credited with popularizing and perfecting the forward pass, a seldom used play at the time. The 1924 team featured the Four Horsemen backfield. In 1927, his complex shifts led directly to a rule change whereby all offensive players had to stop for a full second before the ball could be snapped. Rather than simply a regional team, Rockne's "Fighting Irish" became famous for barnstorming and played any team at any location. It was during Rockne's tenure that the annual Notre Dame-University of Southern California rivalry began. He led his team to an impressive 105–12–5 record before his premature death in a plane crash in 1931. He was so famous at that point that his funeral was broadcast nationally on radio. From a regional to a national sport (1930–1958). In the early 1930s, the college game continued to grow, particularly in the South, bolstered by fierce rivalries such as the "South's Oldest Rivalry", between Virginia and North Carolina and the "Deep South's Oldest Rivalry", between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. Wallace William Wade's 1925 Alabama team won the 1926 Rose Bowl after receiving its first national title and William Alexander's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South. Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. 
Consisting mostly of schools from Texas, the conference saw back-to-back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. The Pacific Coast Conference (PCC), a precursor to the Pac-12 Conference (Pac-12), had its own back-to-back champion in the University of Southern California which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles. As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, the Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press (AP) began its weekly poll of prominent sports writers, ranking all of the nation's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football. The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams' ability to throw the ball. In 1934, the rules committee removed two major penalties—a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone—and shrunk the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer "Slingin" Sammy Baugh. In 1935, New York City's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation's "most outstanding" college football player and has become one of the most coveted awards in all of American sports. During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back-to-back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as "Mr. Inside") and Glenn Davis (known as "Mr. Outside") both won the Heisman Trophy, in 1945 and 1946. On the coaching staff of those 1944–1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi. The 1950s saw the rise of yet more dynasties and power programs. Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47-game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. 
The Michigan State Spartans were known as the "football factory" during the 1950s, where coaches Biggie Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953. Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 when the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so. The 1956 Sugar Bowl also gained international attention when Georgia's pro-segregationist Gov. Griffin publicly threatened Georgia Tech and its President Blake Van Leer over allowing the first African American player to play in a collegiate bowl game in the south. Modern college football (since 1958). Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties. As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time. New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three-back option style offense known as the wishbone. The wishbone is a run-heavy offense that depends on the quarterback making last second decisions on when and to whom to hand or pitch the ball to. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma and Pepper Rodgers at UCLA; who all adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run-based version of the spread, its most common use is as a passing offense designed to "spread" the field both horizontally and vertically. Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Ohio State, and Alabama ranked first, second, and third in total wins. Growth of bowl games. 
In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule. Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. Teams participating in bowl games also get to practice up to four hours per day or 20 hours per week until their bowl game concludes. There is no limit on the number of practices during the bowl season, so teams that play later in the season (usually ones with more wins) get more opportunity to practice than ones that play earlier. This bowl practice period can be compared to the spring practice schedule when teams can have 15 on-field practice sessions. Many teams that play late in the bowl season use the first few practices for evaluation and development of younger players while resting the starters. Determination of national champion. Currently, the NCAA Division I football teams are divided into two divisions – the "football bowl subdivision" (FBS) and the "football championship subdivision"(FCS). As indicated by the name, the FBS teams are eligible to play in post-season bowls. The FCS teams, Division II, Division III, National Junior College teams play in sanctioned tournaments to determine their annual champions. There is not now, and never has been, an NCAA-sanctioned tournament to determine the champion of the top-level football teams. With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie-in), match-ups that guaranteed a consensus national champion became increasingly rare. Bowl Coalition. In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No. 1 versus No. 2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success; tie-ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences—the Pac-10 and Big Ten—meaning that it had limited success. Bowl Alliance. In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games to host a national championship game to three—the Fiesta, Sugar, and Orange Bowls—and the participating conferences to five—the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No.1 and No.2 ranked teams gave up their prior bowl tie-ins and were guaranteed to meet in the national championship game, which rotated between the three participating bowls. 
The system still did not include the Big Ten, Pac-10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect is that if there were three teams at the end of the season vying for a national title, but one of them was a Pac-10/Big Ten team bound to the Rose Bowl, then there would be no difficulty in deciding which teams to place in the Bowl Alliance "national championship" bowl; if the Pac-10 / Big Ten team won the Rose Bowl and finished with the same record as whichever team won the other bowl game, they could have a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac-10/Big Ten team bound to a bowl game, it would be difficult to decide which two teams should play for the national title. Bowl Championship Series. In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac-10, and SEC) and four major bowl games (Rose, Orange, Sugar and Fiesta). The champions of these six conferences, along with two "at-large" selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No.1 and No.2 teams met each year in the national championship game. Traditional tie-ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac-10 champions. The system continued to change, as the formula for ranking teams was tweaked from year to year. At-large teams could be chosen from any of the Division I-A conferences, though only one selection—Utah in 2005—came from a BCS non-AQ conference. Starting with the 2006 season, a fifth game—simply called the BCS National Championship Game—was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at-large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA [C-USA], the Mid-American Conference [MAC], the Mountain West Conference [MW], the Sun Belt Conference and the Western Athletic Conference [WAC]), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the BCS Automatic Qualifying (AQ) conferences. Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-AQ conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl. College Football Playoff. The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. 
The CFP is a multi-team single-elimination tournament (originally four teams; expanded to 12 teams in the 2024 season) whose participants are chosen and seeded by a selection committee. Originally, the semifinals were hosted by two of the group of traditional bowl games known as the New Year's Six, with hosts rotating in a three-year cycle. In the current format, the first round is held at campus sites, with the quarterfinals and semifinals hosted by New Year's Six bowls. In both formats, semifinal winners advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance. The 10 FBS conferences are formally and popularly divided into two groups. Official rules and notable rule distinctions. Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. Before 2023, a single NCAA Football Rules Committee determined the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). As part of an NCAA initiative to give each division more autonomy over its governance, separate rules committees have been established for each NCAA division. Organization. College teams mostly play other similarly sized schools through the NCAA's divisional system. Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships. Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four-team College Football Playoff at the end of the 2014 season. However, the NCAA does not operate that tournament, and its winner is not automatically crowned National Champion. Teams in each of these four divisions are further divided into various regional conferences. Several organizations operate college football programs outside the jurisdiction of the NCAA. A college that fields a team in the NCAA is not restricted from fielding teams in club or sprint football, and several colleges field two teams, a varsity (NCAA) squad and a club or sprint squad (no schools field both club "and" sprint teams at the same time). Playoff games. Starting in the 2014 season, four Division I FBS teams were selected at the end of the regular season to compete in a playoff for the FBS national championship. The inaugural champion was Ohio State University. The College Football Playoff replaced the Bowl Championship Series, which had been used as a selection method to determine the national championship game participants since the 1998 season. The Ohio State Buckeyes won the most recent playoff, defeating the Notre Dame Fighting Irish 34–23 in the 2025 College Football Playoff. At the Division I FCS level, the teams participate in a 24-team playoff (most recently expanded from 20 teams in 2013) to determine the national championship. 
Under the current playoff structure, the top eight teams are all seeded and receive a bye week in the first round. The highest seed receives automatic home field advantage. Starting in 2013, non-seeded teams can only host a playoff game if both teams involved are unseeded; in such a matchup, the schools must bid for the right to host the game. Selection for the playoffs is determined by a selection committee, although usually a team must have an 8–4 record to even be considered. Losses to an FBS team count against a team's playoff eligibility, while wins against a Division II opponent do not count towards playoff consideration. Thus, only Division I wins (whether FBS, FCS, or FCS non-scholarship) are considered for playoff selection. The Division I National Championship game is held in Frisco, Texas. Division II and Division III of the NCAA also participate in their own respective playoffs, crowning national champions at the end of the season. The National Association of Intercollegiate Athletics also holds a playoff. Bowl games. Unlike other college football divisions and most other sports—collegiate or professional—the Football Bowl Subdivision, formerly known as Division I-A college football, has historically not employed a playoff system to determine a champion. Instead, it has a series of postseason "bowl games". The annual National Champion in the Football Bowl Subdivision has traditionally been determined by a vote of sports writers and other non-players. This system has been challenged often, beginning with an NCAA committee proposal in 1979 to have a four-team playoff following the bowl games. However, little headway was made in instituting a playoff tournament until 2014, given the entrenched economic interests in the various bowls. Although the NCAA publishes lists of claimed FBS-level national champions in its official publications, it has never recognized an official FBS national championship; this policy continues even after the establishment of the College Football Playoff (which is not directly run by the NCAA) in 2014. As a result, the official Division I National Champion is the winner of the Football Championship Subdivision, as it is the highest level of football with an NCAA-administered championship tournament. (This also means that FBS student-athletes are the only NCAA athletes who are ineligible for the Elite 90 Award, an academic award presented to the upper class player with the highest grade-point average among the teams that advance to the championship final site.) The first bowl game was the 1902 Rose Bowl, played between Michigan and Stanford; Michigan won 49–0. The game ended with eight minutes still on the clock, after Stanford requested an early finish and Michigan agreed. The contest was so lopsided that the game was not played again until 1916, when the Tournament of Roses decided to reattempt the postseason matchup. The term "bowl" originates from the shape of the Rose Bowl stadium in Pasadena, California, which was built in 1923 and resembled the Yale Bowl, built in 1915. This is where the name came into use, as the game became known as the Rose Bowl Game. Other games came along and used the term "bowl", whether the stadium was shaped like a bowl or not. At the Division I FBS level, teams must earn the right to be bowl eligible by winning at least 6 games during the season (teams that play 13 games in a season, which is allowed for Hawaii and any of its home opponents, must win 7 games). 
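To make that eligibility threshold concrete, the following minimal sketch in Python (using hypothetical records purely for illustration) checks a team's win total against the six-win standard, or seven wins when a thirteen-game regular season is played; it is a simplified reading of the rule as described above, not an official NCAA implementation, and it ignores special cases such as waivers and restrictions on which wins count.

def is_bowl_eligible(wins: int, games_played: int) -> bool:
    # Simplified check of the bowl-eligibility rule described above:
    # at least 6 wins in a normal regular season, or 7 wins when 13 games
    # are played (as allowed for Hawaii and its home opponents).
    # Waivers and rules about which opponents' wins count are ignored here.
    required_wins = 7 if games_played >= 13 else 6
    return wins >= required_wins

# Hypothetical examples: a 6-6 team in a 12-game season qualifies,
# while a team with 6 wins over 13 games falls one win short.
print(is_bowl_eligible(wins=6, games_played=12))  # True
print(is_bowl_eligible(wins=6, games_played=13))  # False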
They are then invited to a bowl game based on their conference ranking and the tie-ins that the conference has to each bowl game. For the 2009 season, there were 34 bowl games, so 68 of the 120 Division I FBS teams were invited to play at a bowl. These games are played from mid-December to early January and most of the later bowl games are typically considered more prestigious. After the Bowl Championship Series, additional all-star bowl games round out the post-season schedule through the beginning of February. Division I FBS National Championship Games. Partly as a compromise between both bowl game and playoff supporters, the NCAA created the Bowl Championship Series (BCS) in 1998 to create a definitive national championship game for college football. The series included the four most prominent bowl games (Rose Bowl, Orange Bowl, Sugar Bowl, Fiesta Bowl), while the national championship game rotated each year between one of these venues. The BCS system was slightly adjusted in 2006, as the NCAA added a fifth game to the series, called the National Championship Game. This allowed the four other BCS bowls to use their normal selection process to select the teams in their games while the top two teams in the BCS rankings would play in the new National Championship Game. The BCS selection committee used a complicated, and often controversial, computer system to rank all Division I-FBS teams and the top two teams at the end of the season played for the national championship. This computer system, which factored in newspaper polls, online polls, coaches' polls, strength of schedule, and various other factors of a team's season, led to much dispute over whether the two best teams in the country were being selected to play in the National Championship Game. The BCS ended after the 2013 season and, since the 2014 season, the FBS national champion has been determined by a four-team tournament known as the College Football Playoff (CFP). A selection committee of college football experts decides the participating teams. Six major bowl games known as the New Year's Six (NY6)—the Rose, Sugar, Cotton, Orange, Peach, and Fiesta Bowls—rotate on a three-year cycle as semi-final games, with the winners advancing to the College Football Playoff National Championship. This arrangement was contractually locked in until the 2026 season, but an agreement was reached on CFP expansion to 12 teams effective with the 2024 season. In the new CFP format, no conferences will receive automatic bids. Playoff berths will be awarded to the top six conference champions in the CFP rankings, plus the top six remaining teams (which may include other conference champions). The top four conference champions receive first-round byes. All first-round games will be played at the home field of the higher seed. The winners of these games advance to meet the top four seeds in the quarterfinals. The NY6 games will host the quarterfinals and semi-finals, rotating so that each bowl game will host two quarterfinals and one semi-final in a three-year cycle. The CFP National Championship will continue to be held at a site determined by open bidding several years in advance. Controversy. College football is a controversial institution within American higher education, where the amount of money involved—what people will pay for the entertainment provided—is a corrupting factor within universities that they are usually ill-equipped to deal with. According to William E. 
Kirwan, chancellor of the University of Maryland System and co-director of the Knight Commission on Intercollegiate Athletics, "We've reached a point where big-time intercollegiate athletics is undermining the integrity of our institutions, diverting presidents and institutions from their main purpose." Football coaches often make more than the presidents of the universities which employ them. Athletes are alleged to receive preferential treatment both in academics and when they run afoul of the law. Although in theory football is an extra-curricular activity engaged in as a sideline by students, it is widely believed to turn a substantial profit, from which the athletes receive no direct benefit. There has been serious discussion about making student-athletes university employees to allow them to be paid. In reality, the majority of major collegiate football programs operated at a financial loss in 2014. There had been discussions on changing rules that prohibited compensation for the use of a player's name, image, and likeness (NIL), but change did not start to come until the mid-2010s. This reform first took place in the NAIA, which initially allowed all student-athletes at its member schools to receive NIL compensation in 2014, and beginning in 2020 specifically allowed these individuals to reference their athletic participation in their endorsement deals. The NCAA passed its own NIL reform, very similar to the NAIA's most recent reform, in July 2021, after its hand was forced by multiple states that had passed legislation allowing NIL compensation, most notably California. On June 3 of 2021, "The NCAA's board of directors adopted a temporary rule change that opened the door for NIL activity, instructing schools to set their own policy for what should be allowed with minimal guidelines" (Murphy 2021). On July 1 of 2021, the new rules set in and student athletes could start signing endorsements using their name, image and likeness. "The NCAA has asked Congress for help in creating a federal NIL law. While several federal options have been proposed, it's becoming increasingly likely that state laws will start to go into effect before a nationwide change is made. There are 28 states with NIL laws already in place and multiple others that are actively pursuing legislation" (Murphy 2021). Charlie Baker called for a ban on all college football betting (and betting on college sports in general) because of prop bets for student athletes. With past scandals and threats to college athletes, Baker requested states with sports betting to adjust their regulations to remove these bet types. While some were quick to do so (including Louisiana, Colorado, Ohio), others rejected the notion and continued to offer sports betting the same way. College football outside the United States. Canadian football, which parallels American football, is played by university teams in Canada under the auspices of U Sports. (Unlike in the United States, no junior colleges play football in Canada, and the sanctioning body for junior college athletics in Canada, CCAA, does not sanction the sport.) However, amateur football outside of colleges is played in Canada, such as in the Canadian Junior Football League. Organized competition in American football also exists at the collegiate level in Mexico (ONEFA), the UK (British Universities American Football League), Japan (Japan American Football Association, Koshien Bowl), and South Korea (Korea American Football Association). Injuries. 
According to a 2017 study on the brains of deceased gridiron football players, 99% of tested brains of NFL players, 88% of CFL players, 64% of semi-professional players, 91% of college football players, and 21% of high school football players had various stages of CTE. The study noted it had limitations due to "selection bias", in that the brains donated came from families who suspected CTE, but "The fact that we were able to gather so many instances of a disease that was previously considered quite rare, in eight years, speaks volumes." Other common injuries include injuries to the legs, arms, and lower back.
6773
27823944
https://en.wikipedia.org/wiki?curid=6773
Ciprofloxacin
Ciprofloxacin is a fluoroquinolone antibiotic used to treat a number of bacterial infections. These include bone and joint infections, intra-abdominal infections, certain types of infectious diarrhea, respiratory tract infections, skin infections, typhoid fever, and urinary tract infections, among others. For some infections it is used in addition to other antibiotics. It can be taken by mouth, as eye drops, as ear drops, or intravenously. Common side effects include nausea, vomiting, and diarrhea. Severe side effects include tendon rupture, hallucinations, and nerve damage. In people with myasthenia gravis, it may worsen muscle weakness. Rates of side effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Studies in other animals raise concerns regarding use in pregnancy. No problems were identified, however, in the children of a small number of women who took the medication. It appears to be safe during breastfeeding. It is a second-generation fluoroquinolone with a broad spectrum of activity that usually results in the death of the bacteria. Ciprofloxacin was patented in 1980 and introduced by Bayer in 1987. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ciprofloxacin as critically important for human medicine. It is available as a generic medication. In 2022, it was the 181st most commonly prescribed medication in the United States, with more than 2 million prescriptions. Medical uses. Ciprofloxacin is used to treat a wide variety of infections, including infections of bones and joints, endocarditis, bacterial gastroenteritis, malignant otitis externa, bubonic plague, respiratory tract infections, cellulitis, urinary tract infections, prostatitis, anthrax, and chancroid. Ciprofloxacin occupies an important role in treatment guidelines issued by major medical societies for the treatment of serious infections, especially those likely to be caused by Gram-negative bacteria, including "Pseudomonas aeruginosa". For example, ciprofloxacin in combination with metronidazole is one of several first-line antibiotic regimens recommended by the Infectious Diseases Society of America for the treatment of community-acquired abdominal infections in adults. It also features prominently in treatment guidelines for acute pyelonephritis, complicated or hospital-acquired urinary tract infection, acute or chronic prostatitis, certain types of endocarditis, certain skin infections, and prosthetic joint infections. In other cases, treatment guidelines are more restrictive, recommending in most cases that older, narrower-spectrum drugs be used as first-line therapy for less severe infections to minimize the development of fluoroquinolone resistance. For example, the Infectious Diseases Society of America recommends that the use of ciprofloxacin and other fluoroquinolones in urinary tract infections be reserved for cases of proven or expected resistance to narrower-spectrum drugs such as nitrofurantoin or trimethoprim/sulfamethoxazole. The European Association of Urology recommends ciprofloxacin as an alternative regimen for the treatment of uncomplicated urinary tract infections, but cautions that the potential for "adverse events have to be considered". 
Although approved by regulatory authorities for the treatment of respiratory infections, ciprofloxacin is not recommended for respiratory infections by most treatment guidelines due in part to its modest activity against the common respiratory pathogen "Streptococcus pneumoniae". "Respiratory quinolones" such as levofloxacin, having greater activity against this pathogen, are recommended as first line agents for the treatment of community-acquired pneumonia in patients with important co-morbidities and in patients requiring hospitalization (Infectious Diseases Society of America 2007). Similarly, ciprofloxacin is not recommended as a first-line treatment for acute sinusitis. Ciprofloxacin is approved for the treatment of gonorrhea in many countries, but this recommendation is widely regarded as obsolete due to resistance development. Pregnancy. An expert review of published data on experiences with ciprofloxacin use during pregnancy concluded therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk (quantity and quality of data=fair), but the data is insufficient to state that no risks exist. Exposure to quinolones, including levofloxacin, during the first-trimester is not associated with an increased risk of stillbirths, premature births, birth defects, or low birth weight. Two small post-marketing epidemiology studies of mostly short-term, first-trimester exposure found that fluoroquinolones did not increase risk of major malformations, spontaneous abortions, premature birth, or low birth weight. Breastfeeding. Fluoroquinolones have been reported as present in a mother's milk and thus passed on to the nursing child. Children. Oral and intravenous ciprofloxacin are approved by the FDA for use in children for only two indications due to the risk of permanent injury to the musculoskeletal system: Spectrum of activity. Its spectrum of activity includes most strains of bacterial pathogens responsible for community-acquired pneumonias, bronchitis, urinary tract infections, and gastroenteritis. Ciprofloxacin is particularly effective against Gram-negative bacteria (such as "Escherichia coli", "Haemophilus influenzae", "Klebsiella pneumoniae", "Legionella pneumophila", "Moraxella catarrhalis", "Proteus mirabilis", and "Pseudomonas aeruginosa"), but is less effective against Gram-positive bacteria (such as methicillin-sensitive "Staphylococcus aureus", "Streptococcus pneumoniae", and "Enterococcus faecalis") than newer fluoroquinolones. Bacterial resistance. As a result of its widespread use to treat minor infections readily treatable with older, narrower-spectrum antibiotics, many bacteria have developed resistance to this drug, leaving it significantly less effective than it would have been otherwise. Resistance to ciprofloxacin and other fluoroquinolones may evolve rapidly, even during a course of treatment. Numerous pathogens, including enterococci, "Streptococcus pyogenes", and "Klebsiella pneumoniae" (quinolone-resistant) now exhibit resistance. Widespread veterinary usage of fluoroquinolones, particularly in Europe, has been implicated. Meanwhile, some "Burkholderia cepacia", "Clostridium innocuum", and "Enterococcus faecium" strains have developed resistance to ciprofloxacin to varying degrees. Fluoroquinolones had become the class of antibiotics most commonly prescribed to adults in 2002. 
Nearly half (42%) of those prescriptions in the US were for conditions not approved by the FDA, such as acute bronchitis, otitis media, and acute upper respiratory tract infection. Contraindications. Contraindications include: Ciprofloxacin is also considered to be contraindicated in children (except for the indications outlined above), in pregnancy, in nursing mothers, and in people with epilepsy or other seizure disorders. Caution may be required in people with Marfan syndrome or Ehlers–Danlos syndrome. Adverse effects. Adverse effects can involve the tendons, muscles, joints, nerves, and the central nervous system. Rates of adverse effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Compared to other antibiotics, some studies find a higher rate of adverse effects, while others find no difference. In clinical trials most of the adverse events were described as mild or moderate in severity, abated soon after the drug was discontinued, and required no treatment. Some adverse effects may be permanent. Ciprofloxacin was discontinued because of an adverse event in 1% of people treated with the medication by mouth. The most frequently reported drug-related events, from trials of all formulations, all dosages, all drug-therapy durations, and for all indications, were nausea (2.5%), diarrhea (1.6%), abnormal liver function tests (1.3%), vomiting (1%), and rash (1%). Other adverse events occurred at rates of <1%. Tendon problems. Ciprofloxacin includes a boxed warning in the United States due to an increased risk of tendinitis and tendon rupture, especially in people who are older than 60 years, people who also use corticosteroids, and people with kidney, lung, or heart transplants. Tendon rupture can occur during therapy or even months after discontinuation of the medication. One study found that fluoroquinolone use was associated with a 1.9-fold increase in tendon problems. The risk increased to 3.2-fold in those over 60 years of age and to 6.2-fold in those over 60 who were also taking corticosteroids. Among the 46,766 quinolone users in the study, 38 (0.08%) cases of Achilles tendon rupture were identified. Cardiac arrhythmia. The fluoroquinolones, including ciprofloxacin, are associated with an increased risk of cardiac toxicity, including QT interval prolongation, "torsades de pointes", ventricular arrhythmia, and sudden death. Nervous system. Because ciprofloxacin is lipophilic, it has the ability to cross the blood–brain barrier. The 2013 FDA label warns of nervous system effects. Ciprofloxacin, like other fluoroquinolones, is known to trigger seizures or lower the seizure threshold, and may cause other central nervous system adverse effects. Headache, dizziness, and insomnia have been reported as occurring fairly commonly in postapproval review articles, along with a much lower incidence of serious CNS adverse effects such as tremors, psychosis, anxiety, hallucinations, paranoia, and suicide attempts, especially at higher doses. Like other fluoroquinolones, it is also known to cause peripheral neuropathy that may be irreversible, with symptoms such as weakness, burning pain, tingling, or numbness. Fluoroquinolones have also been reported to cause movement disorders; in this context, ciprofloxacin is especially associated with myoclonus, which has given rise to the term "ciproclonus." Cancer.
Ciprofloxacin is active in six of eight "in vitro" assays used as rapid screens for the detection of genotoxic effects, but is not active in "in vivo" assays of genotoxicity. Long-term carcinogenicity studies in rats and mice resulted in no carcinogenic or tumorigenic effects due to ciprofloxacin at daily oral dose levels up to 250 and 750 mg/kg to rats and mice, respectively (about 1.7 and 2.5 times the highest recommended therapeutic dose based upon mg/m2). Results from photo co-carcinogenicity testing indicate ciprofloxacin does not reduce the time to appearance of UV-induced skin tumors as compared to vehicle control. Other. The other black box warning is that ciprofloxacin should not be used in people with myasthenia gravis due to possible exacerbation of muscle weakness which may lead to breathing problems resulting in death or ventilator support. Fluoroquinolones are known to block neuromuscular transmission. There are concerns that fluoroquinolones including ciprofloxacin can affect cartilage in young children. "Clostridioides difficile"-associated diarrhea is a serious adverse effect of ciprofloxacin and other fluoroquinolones; it is unclear whether the risk is higher than with other broad-spectrum antibiotics. A wide range of rare but potentially fatal adverse effects reported to the US FDA or the subject of case reports includes aortic dissection, toxic epidermal necrolysis, Stevens–Johnson syndrome, low blood pressure, allergic pneumonitis, bone marrow suppression, hepatitis or liver failure, and sensitivity to light. The medication should be discontinued if a rash, jaundice, or other sign of hypersensitivity occurs. Children and the elderly are at a much greater risk of experiencing adverse reactions. Overdose. Overdose of ciprofloxacin may result in reversible renal toxicity. Treatment of overdose includes emptying of the stomach by induced vomiting or gastric lavage, as well as administration of antacids containing magnesium, aluminium, or calcium to reduce drug absorption. Renal function and urinary pH should be monitored. Important support includes adequate hydration and urine acidification if necessary to prevent crystalluria. Hemodialysis or peritoneal dialysis can only remove less than 10% of ciprofloxacin. Ciprofloxacin may be quantified in plasma or serum to monitor for drug accumulation in patients with hepatic dysfunction or to confirm a diagnosis of poisoning in acute overdose victims. Interactions. Ciprofloxacin interacts with certain foods and several other drugs leading to undesirable increases or decreases in the serum levels or distribution of one or both drugs. Ciprofloxacin should not be taken with antacids containing magnesium or aluminum, highly buffered drugs (sevelamer, lanthanum carbonate, sucralfate, didanosine), or with supplements containing calcium, iron, or zinc. It should be taken two hours before or six hours after these products. Magnesium or aluminum antacids turn ciprofloxacin into insoluble salts that are not readily absorbed by the intestinal tract, reducing peak serum concentrations by 90% or more, leading to therapeutic failure. Additionally, it should not be taken with dairy products or calcium-fortified juices alone, as peak serum concentration and the area under the serum concentration-time curve can be reduced up to 40%. However, ciprofloxacin may be taken with dairy products or calcium-fortified juices as part of a meal. 
Ciprofloxacin inhibits the drug-metabolizing enzyme CYP1A2 and thereby can reduce the clearance of drugs metabolized by that enzyme. CYP1A2 substrates that exhibit increased serum levels in ciprofloxacin-treated patients include tizanidine, theophylline, caffeine, methylxanthines, clozapine, olanzapine, and ropinirole. Co-administration of ciprofloxacin with the CYP1A2 substrate tizanidine (Zanaflex) is contraindicated due to a 583% increase in the peak serum concentrations of tizanidine when administered with ciprofloxacin as compared to administration of tizanidine alone. Use of ciprofloxacin is cautioned in patients on theophylline due to its narrow therapeutic index. The authors of one review recommended that patients being treated with ciprofloxacin reduce their caffeine intake. Evidence for significant interactions with several other CYP1A2 substrates such as cyclosporine is equivocal or conflicting. The Committee on Safety of Medicines and the FDA warn that central nervous system adverse effects, including seizure risk, may be increased when NSAIDs are combined with quinolones. The mechanism for this interaction may involve a synergistic increased antagonism of GABA neurotransmission. Altered serum levels of the antiepileptic drugs phenytoin and carbamazepine (increased and decreased) have been reported in patients receiving concomitant ciprofloxacin. Ciprofloxacin is a potent inhibitor of CYP1A2, CYP2D6, and CYP3A4. Mechanism of action. Ciprofloxacin is a broad-spectrum antibiotic of the fluoroquinolone class. It is active against some Gram-positive and many Gram-negative bacteria. It functions by inhibiting a type II topoisomerase (DNA gyrase) and topoisomerase IV, necessary to separate bacterial DNA, thereby inhibiting cell division. Bacterial DNA fragmentation will occur as a result of inhibition of the enzymes. Pharmacokinetics. Ciprofloxacin for systemic administration is available as immediate-release tablets, extended-release tablets, an oral suspension, and as a solution for intravenous administration. When administered over one hour as an intravenous infusion, ciprofloxacin rapidly distributes into the tissues, with levels in some tissues exceeding those in the serum. Penetration into the central nervous system is relatively modest, with cerebrospinal fluid levels normally less than 10% of peak serum concentrations. The serum half-life of ciprofloxacin is about 4–6 hours, with 50–70% of an administered dose being excreted in the urine as unmetabolized drug. An additional 10% is excreted in urine as metabolites. Urinary excretion is virtually complete 24 hours after administration. Dose adjustment is required in the elderly and in those with renal impairment. Ciprofloxacin is weakly bound to serum proteins (20–40%). It is an inhibitor of the drug-metabolizing enzyme cytochrome P450 1A2, which leads to the potential for clinically important drug interactions with drugs metabolized by that enzyme. Ciprofloxacin is about 70% available when administered orally. The extended release tablets allow once-daily administration by releasing the drug more slowly in the gastrointestinal tract. These tablets contain 35% of the administered dose in an immediate-release form and 65% in a slow-release matrix. Maximum serum concentrations are achieved between 1 and 4 hours after administration. Compared to the 250- and 500-mg immediate-release tablets, the 500-mg and 1000-mg XR tablets provide higher Cmax, but the 24‑hour AUCs are equivalent. 
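As a rough illustration of the elimination kinetics described above (a serum half-life of roughly 4–6 hours), the short Python sketch below estimates what fraction of an absorbed dose remains at a few time points, assuming simple first-order elimination. The half-life values and time points are assumptions chosen for demonstration only; this is not dosing guidance and not a model of any specific formulation.

# Minimal sketch of first-order elimination using the reported half-life range.
# All numbers here are illustrative assumptions, not clinical guidance.

def fraction_remaining(hours, half_life_hours):
    """Fraction of drug remaining after `hours`, assuming first-order elimination."""
    return 0.5 ** (hours / half_life_hours)

for half_life in (4.0, 6.0):          # reported serum half-life range, in hours
    for t in (4, 8, 12, 24):          # hours after a single dose
        pct = 100 * fraction_remaining(t, half_life)
        print(f"t1/2 = {half_life} h, t = {t:2d} h: {pct:5.1f}% of the dose remaining")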
Ciprofloxacin immediate-release tablets contain ciprofloxacin as the hydrochloride salt, and the XR tablets contain a mixture of the hydrochloride salt and the free base. Chemical properties. Ciprofloxacin is 1-cyclopropyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-3-quinolinecarboxylic acid. Its empirical formula is C17H18FN3O3 and its molecular weight is 331.4 g/mol. It is a faintly yellowish to light yellow crystalline substance. Ciprofloxacin hydrochloride (USP) is the monohydrochloride monohydrate salt of ciprofloxacin. It is a faintly yellowish to light yellow crystalline substance with a molecular weight of 385.8 g/mol. Its empirical formula is C17H18FN3O3HCl•H2O. Usage. Ciprofloxacin is the most widely used of the second-generation quinolones. In 2010, over 20 million prescriptions were written, making it the 35th-most-commonly prescribed generic drug and the 5th-most-commonly prescribed antibacterial in the US. History. The first members of the quinolone antibacterial class were relatively low-potency drugs such as nalidixic acid, used mainly in the treatment of urinary tract infections owing to their renal excretion and propensity to be concentrated in urine. In 1979, the publication of a patent filed by the pharmaceutical arm of Kyorin Seiyaku Kabushiki Kaisha disclosed the discovery of norfloxacin, and the demonstration that certain structural modifications including the attachment of a fluorine atom to the quinolone ring leads to dramatically enhanced antibacterial potency. In the aftermath of this disclosure, several other pharmaceutical companies initiated research and development programs with the goal of discovering additional antibacterial agents of the fluoroquinolone class. The fluoroquinolone program at Bayer focused on examining the effects of very minor changes to the norfloxacin structure. In 1983, the company published "in vitro" potency data for ciprofloxacin, a fluoroquinolone antibacterial having a chemical structure differing from that of norfloxacin by the presence of a single carbon atom. This small change led to a two- to 10-fold increase in potency against most strains of Gram-negative bacteria. Importantly, this structural change led to a four-fold improvement in activity against the important Gram-negative pathogen "Pseudomonas aeruginosa", making ciprofloxacin one of the most potent known drugs for the treatment of this intrinsically antibiotic-resistant pathogen. The oral tablet form of ciprofloxacin was approved in October 1987, just one year after the approval of norfloxacin. In 1991, the intravenous formulation was introduced. Ciprofloxacin sales reached a peak of about 2 billion euros in 2001, before Bayer's patent expired in 2004, after which annual sales have averaged around €200 million. The name probably originates from the International Scientific Nomenclature: ci- (alteration of cycl-) + propyl + fluor- + ox- + az- + -mycin. Society and culture. Economics. It is available as a generic medication and not very expensive. Generic equivalents. In October 2001, the Prescription Access Litigation (PAL) project filed suit to dissolve an agreement between Bayer and three of its competitors which produced generic versions of drugs (Barr Laboratories, Rugby Laboratories, and Hoechst-Marion-Roussel) that PAL claimed was blocking access to adequate supplies and cheaper, generic versions of ciprofloxacin. 
The plaintiffs charged that Bayer Corporation, a unit of Bayer AG, had unlawfully paid the three competing companies a total of $200 million to prevent cheaper, generic versions of ciprofloxacin from being brought to the market, as well as manipulating its price and supply. Numerous other consumer advocacy groups joined the lawsuit. On 15 October 2008, five years after Bayer's patent had expired, the United States District Court for the Eastern District of New York granted Bayer's and the other defendants' motion for summary judgment, holding that any anticompetitive effects caused by the settlement agreements between Bayer and its codefendants were within the exclusionary zone of the patent and thus could not be redressed by federal antitrust law, in effect upholding Bayer's agreement with its competitors. Available forms. Ciprofloxacin for systemic administration is available as immediate-release tablets, as extended-release tablets, as an oral suspension, and as a solution for intravenous infusion. It is available for local administration as eye drops and ear drops. It is available in combination with dexamethasone, with celecoxib, with hydrocortisone, and with fluocinolone acetonide. Litigation. A class action was filed against Bayer AG on behalf of employees of the Brentwood Post Office in Washington, D.C., and workers at the U.S. Capitol, along with employees of American Media, Inc. in Florida and postal workers in general who alleged they developed serious adverse effects from taking ciprofloxacin in the aftermath of the anthrax attacks in 2001. The action alleged Bayer failed to warn class members of the potential side effects of the drug, thereby violating the Pennsylvania Unfair Trade Practices and Consumer Protection Laws. The class action was defeated and the litigation abandoned by the plaintiffs. A similar action was filed in 2003 in New Jersey by four New Jersey postal workers but was withdrawn for lack of grounds, as workers had been informed of the risks of ciprofloxacin when they were given the option of taking the drug. Research. As resistance to ciprofloxacin has grown since its introduction, research has been conducted to discover and develop analogs that can be effective against resistant bacteria; some have been looked at in antiviral models as well.
6774
37031437
https://en.wikipedia.org/wiki?curid=6774
Consubstantiation
Consubstantiation is a Christian theological doctrine that (like transubstantiation) describes the real presence of Christ in the Eucharist. It holds that during the sacrament, the substance of the body and blood of Christ are present alongside the substance of the bread and wine, which remain present. It was part of the doctrines of Lollardy, and considered a heresy by the Roman Catholic Church. It was later championed by Edward Pusey of the Oxford Movement, and is therefore held by many high church Anglicans, seemingly contrary to the Black Rubric of the Book of Common Prayer. The Irvingian Churches (such as the New Apostolic Church) adhere to consubstantiation as the explanation of the real presence of Christ in the Eucharist. Development. In England in the late 14th century, there was a political and religious movement known as Lollardy. Among much broader goals, the Lollards affirmed a form of consubstantiation—that the Eucharist remained physically bread and wine, while becoming spiritually the body and blood of Christ. Lollardy survived up until the time of the English Reformation. Whilst ultimately rejected by him on account of the authority of the Church of Rome, William of Ockham entertains a version of consubstantiation in his "Fourth Quodlibet, Question 30", where he claims that "the substance of the bread and the substance of the wine remain there and that the substance of the body of Christ remains in the same place, together with the substance of the bread". Literary critic Kenneth Burke's dramatism takes this concept and utilizes it in secular rhetorical theory to look at the dialectic of unity and difference within the context of logology. The doctrine of consubstantiation is often held in contrast to the doctrine of transubstantiation. To explain the manner of Christ's presence in Holy Communion, many high church Anglicans teach the philosophical explanation of consubstantiation. A major leader in the Anglo-Catholic Oxford Movement, Edward Pusey, championed the view of consubstantiation. Pusey's view is that: The Irvingian Churches adhere to the doctrine of consubstantiation; for example, "The Catechism of the New Apostolic Church" states: The term "consubstantiation" has been used to describe Martin Luther's Eucharistic doctrine, the sacramental union. Lutheran theologians reject the term because it refers to a philosophical construct that they believe differs from the Lutheran doctrine of the sacramental union, denotes a mixing of substances (bread and wine with body and blood), and suggests a "gross, Capernaitic, carnal" presence of the body and blood of Christ.
6775
47784746
https://en.wikipedia.org/wiki?curid=6775
Chlorophyta
Chlorophyta is a division of green algae informally called chlorophytes. Description. Chlorophytes are eukaryotic organisms composed of cells with a variety of coverings or walls, and usually a single green chloroplast in each cell. They are structurally diverse: most groups of chlorophytes are unicellular, such as the earliest-diverging prasinophytes, but in two major classes (Chlorophyceae and Ulvophyceae) there is an evolutionary trend toward various types of complex colonies and even multicellularity. Chloroplasts. Chlorophyte cells contain green chloroplasts surrounded by a double-membrane envelope. These contain chlorophylls "a" and "b", and the carotenoids carotin, lutein, zeaxanthin, antheraxanthin, violaxanthin, and neoxanthin, which are also present in the leaves of land plants. Some special carotenoids are present in certain groups, or are synthesized under specific environmental factors, such as siphonaxanthin, prasinoxanthin, echinenone, canthaxanthin, loroxanthin, and astaxanthin. They accumulate carotenoids under nitrogen deficiency, high irradiance of sunlight, or high salinity. In addition, they store starch inside the chloroplast as carbohydrate reserves. The thylakoids can appear single or in stacks. In contrast to other divisions of algae such as Ochrophyta, chlorophytes lack a chloroplast endoplasmic reticulum. Flagellar apparatus. Chlorophytes often form flagellate cells that generally have two or four flagella of equal length, although in prasinophytes heteromorphic (i.e. differently shaped) flagella are common because different stages of flagellar maturation are displayed in the same cell. Flagella have been independently lost in some groups, such as the Chlorococcales. Flagellate chlorophyte cells have symmetrical cross-shaped ('cruciate') root systems, in which ciliary rootlets with a variable high number of microtubules alternate with rootlets composed of just two microtubules; this forms an arrangement known as the "X-2-X-2" arrangement, unique to chlorophytes. They are also distinguished from streptophytes by the place where their flagella are inserted: directly at the cell apex, whereas streptophyte flagella are inserted at the sides of the cell apex (sub-apically). Below the flagellar apparatus of prasinophytes are rhizoplasts, contractile muscle-like structures that sometimes connect with the chloroplast or the cell membrane. In core chlorophytes, this structure connects directly with the surface of the nucleus. The surface of flagella lacks microtubular hairs, but some genera present scales or fibrillar hairs. The earliest-branching groups have flagella often covered in at least one layer of scales, if not naked. Metabolism. Chlorophytes and streptophytes differ in the enzymes and organelles involved in photorespiration. Chlorophyte algae use a dehydrogenase inside the mitochondria to process glycolate during photorespiration. In contrast, streptophytes (including land plants) use peroxisomes that contain glycolate oxidase, which converts glycolate to glycoxylate, and the hydrogen peroxide created as a subproduct is reduced by catalases located in the same organelles. Reproduction and life cycle. Asexual reproduction is widely observed in chlorophytes. Among core chlorophytes, both unicellular groups can reproduce asexually through autospores, wall-less zoospores, fragmentation, plain cell division, and exceptionally budding. 
Multicellular thalli can reproduce asexually through motile zoospores, non-motile aplanospores, autospores, filament fragmentation, differentiated resting cells, and even unmated gametes. Colonial groups can reproduce asexually through the formation of autocolonies, where each cell divides to form a colony with the same number and arrangement of cells as the parent colony. Many chlorophytes exclusively conduct asexual reproduction, but some display sexual reproduction, which may be isogamous (i.e., gametes of both sexes are identical), anisogamous (gametes are different) or oogamous (gametes are sperm and egg cells), with an evolutionary tendency towards oogamy. Their gametes are usually specialized cells differentiated from vegetative cells, although in unicellular Volvocales the vegetative cells can function simultaneously as gametes. Most chlorophytes have a diplontic life cycle (also known as zygotic), where the gametes fuse into a zygote which germinates, grows and eventually undergoes meiosis to produce haploid spores (gametes), similarly to ochrophytes and animals. Some exceptions display a haplodiplontic life cycle, where there is an alternation of generations, similarly to land plants. These generations can be isomorphic (i.e., of similar shape and size) or heteromorphic. The formation of reproductive cells usually does not occur in specialized cells, but some Ulvophyceae have specialized reproductive structures: gametangia, to produce gametes, and sporangia, to produce spores. The earliest-diverging unicellular chlorophytes (prasinophytes) produce walled resistant stages called cysts or 'phycoma' stages before reproduction; in some groups the cysts are as large as 230 μm in diameter. To develop them, the flagellate cells form an inner wall by discharging mucilage vesicles to the outside, increase the level of lipids in the cytoplasm to enhance buoyancy, and finally develop an outer wall. Inside the cysts, the nucleus and cytoplasm undergo division into numerous flagellate cells that are released by rupturing the wall. In some species these daughter cells have been confirmed to be gametes; otherwise, sexual reproduction is unknown in prasinophytes. Ecology. Free-living. Chlorophytes are an important portion of the phytoplankton in both freshwater and marine habitats, fixating more than a billion tons of carbon every year. They also live as multicellular macroalgae, or seaweeds, settled along rocky ocean shores. Most species of Chlorophyta are aquatic, prevalent in both marine and freshwater environments. About 90% of all known species live in freshwater. Some species have adapted to a wide range of terrestrial environments. For example, "Chlamydomonas nivalis" lives on summer alpine snowfields, and "Trentepohlia" species, live attached to rocks or woody parts of trees. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experience extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales, are exclusively found on land. Symbionts. Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. 
Two common species of the heterotrophic green alga "Prototheca" are pathogenic and can cause the disease protothecosis in humans and animals. With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. All members of the clade have motile flagellated swimming cells. "Monostroma kuroshiense", an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group. Systematics. Taxonomic history. The first mention of Chlorophyta belongs to German botanist Heinrich Gottlieb Ludwig Reichenbach in his 1828 work "Conspectus regni vegetabilis". Under this name, he grouped all algae, mosses ('musci') and ferns ('filices'), as well as some seed plants ("Zamia" and "Cycas"). This usage did not gain popularity. In 1914, Bohemian botanist Adolf Pascher modified the name to encompass exclusively green algae, that is, algae which contain chlorophylls "a" and "b" and store starch in their chloroplasts. Pascher established a scheme where Chlorophyta was composed of two groups: Chlorophyceae, which included algae now known as Chlorophyta, and Conjugatae, which are now known as Zygnematales and belong to the Streptophyta clade from which land plants evolved. During the 20th century, many different classification schemes for the Chlorophyta arose. The Smith system, published in 1938 by American botanist Gilbert Morgan Smith, distinguished two classes: Chlorophyceae, which contained all green algae (unicellular and multicellular) that did not grow through an apical cell; and Charophyceae, which contained only multicellular green algae that grew via an apical cell and had special sterile envelopes to protect the sex organs. With the advent of electron microscopy studies, botanists published various classification proposals based on finer cellular structures and phenomena, such as mitosis, cytokinesis, cytoskeleton, flagella and cell wall polysaccharides. British botanist proposed in 1971 a scheme which distinguishes Chlorophyta from other green algal divisions Charophyta, Prasinophyta and Euglenophyta. He included four classes of chlorophytes: Zygnemaphyceae, Oedogoniophyceae, Chlorophyceae and Bryopsidophyceae. Other proposals retained the Chlorophyta as containing all green algae, and varied from one another in the number of classes. For example, the 1984 proposal by Mattox & Stewart included five classes, while the 1985 proposal by Bold & Wynne included only two, and the 1995 proposal by Christiaan van den Hoek and coauthors included up to eleven classes. The modern usage of the name 'Chlorophyta' was established in 2004, when phycologists Lewis & McCourt firmly separated the chlorophytes from the streptophytes on the basis of molecular phylogenetics. All green algae that were more closely related to land plants than to chlorophytes were grouped as a paraphyletic division Charophyta. Within the green algae, the earliest-branching lineages were grouped under the informal name of "prasinophytes", and they were all believed to belong to the Chlorophyta clade. However, in 2020 a study recovered a new clade and division known as Prasinodermophyta, which contains two prasinophyte lineages previously considered chlorophytes. 
Below is a cladogram representing the current state of green algal classification: Classification. Currently eleven chlorophyte classes are accepted, here presented in alphabetical order with some of their characteristics and biodiversity: Evolution. In February 2020, the fossilized remains of a green alga, named "Proterocladus antiquus" were discovered in the northern province of Liaoning, China. At around a billion years old, it is believed to be one of the oldest examples of a multicellular chlorophyte. It is currently classified as a member of order Siphonocladales, class Ulvophyceae. In 2023, a study calculated the molecular age of green algae as calibrated by this fossil. The study estimated the origin of Chlorophyta within the Mesoproterozoic era, at around 2.04–1.23 billion years ago. Usage. Model organisms. Among chlorophytes, a small group known as the volvocine green algae is being researched to understand the origins of cell differentiation and multicellularity. In particular, the unicellular flagellate "Chlamydomonas reinhardtii" and the colonial organism "Volvox carteri" are object of interest due to sharing homologous genes that in "Volvox" are directly involved in the development of two different cell types with full division of labor between swimming and reproduction, whereas in "Chlamydomonas" only one cell type exists that can function as a gamete. Other volvocine species, with intermediate characters between these two, are studied to further understand the transition towards the cellular division of labor, namely "Gonium pectorale", "Pandorina morum", "Eudorina elegans" and "Pleodorina starrii". Industrial uses. Chlorophyte microalgae are a valuable source of biofuel and various chemicals and products in industrial amounts, such as carotenoids, vitamins and unsaturated fatty acids. The genus "Botryococcus" is an efficient producer of hydrocarbons, which are converted into biodiesel. Various genera ("Chlorella", "Scenedesmus", "Haematococcus", "Dunaliella" and "Tetraselmis") are used as cellular factories of biomass, lipids and different vitamins for either human or animal consumption, and even for usage as pharmaceuticals. Some of their pigments are employed for cosmetics.
6776
17220920
https://en.wikipedia.org/wiki?curid=6776
Capybara
The capybara or greater capybara (Hydrochoerus hydrochaeris) is the largest living rodent, native to South America. It is a member of the genus "Hydrochoerus". The only other extant member is the lesser capybara ("Hydrochoerus isthmius"). Its close relatives include guinea pigs and rock cavies, and it is more distantly related to the agouti, the chinchilla, and the nutria. The capybara inhabits savannas and dense forests, and lives near bodies of water. It is a highly social species and can be found in groups as large as one hundred individuals, but usually live in groups of 10–20 individuals. The capybara is hunted for its meat and hide and also for grease from its thick fatty skin. Etymology. Its common name is derived from Tupi , a complex agglutination of (leaf) + (slender) + (eat) + (a suffix for agent nouns), meaning "one who eats slender leaves", or "grass-eater". The genus name, "hydrochoerus", comes from Greek (' "water") and (' "pig, hog") and the species name, "hydrochaeris", comes from Greek (' "water") and (' "feel happy, enjoy"). Classification and phylogeny. The capybara and the lesser capybara both belong to the subfamily Hydrochoerinae along with the rock cavies. The living capybaras and their extinct relatives were previously classified in their own family Hydrochoeridae. Since 2002, molecular phylogenetic studies have recognized a close relationship between "Hydrochoerus" and "Kerodon", the rock cavies, supporting placement of both genera in a subfamily of Caviidae. Paleontological classifications previously used Hydrochoeridae for all capybaras, while using Hydrochoerinae for the living genus and its closest fossil relatives, such as "Neochoerus", but more recently have adopted the classification of Hydrochoerinae within Caviidae. The taxonomy of fossil hydrochoerines is also in a state of flux. In recent years, the diversity of fossil hydrochoerines has been substantially reduced. This is largely due to the recognition that capybara molar teeth show strong variation in shape over the life of an individual. In one instance, material once referred to four genera and seven species on the basis of differences in molar shape is now thought to represent differently aged individuals of a single species, "Cardiatherium paranense". Among fossil species, the name "capybara" can refer to the many species of Hydrochoerinae that are more closely related to the modern "Hydrochoerus" than to the "cardiomyine" rodents like "Cardiomys". The fossil genera "Cardiatherium", "Phugatherium", "Hydrochoeropsis", and "Neochoerus" are all capybaras under that concept. Description. The capybara has a heavy, barrel-shaped body and short head, with reddish-brown fur on the upper part of its body that turns yellowish-brown underneath. Its sweat glands can be found in the surface of the hairy portions of its skin, an unusual trait among rodents. The animal lacks down hair, and its guard hair differs little from over hair. Adult capybaras grow to in length, stand tall at the withers, and typically weigh , with an average in the Venezuelan llanos of . Females are slightly heavier than males. The top recorded weights are for a wild female from Brazil and for a wild male from Uruguay. Also, an 81 kg individual was reported in São Paulo in 2001 or 2002. The dental formula is . Capybaras have slightly webbed feet and vestigial tails. Their hind legs are slightly longer than their forelegs; they have three toes on their rear feet and four toes on their front feet. 
Their muzzles are blunt, with nostrils, and the eyes and ears are near the top of their heads. Its karyotype has 2n = 66 and FN = 102, meaning it has 66 chromosomes with a total of 102 arms. Ecology. Capybaras are semiaquatic mammals found throughout all countries of South America except Chile. They live in densely forested areas near bodies of water, such as lakes, rivers, swamps, ponds, and marshes, as well as flooded savannah and along rivers in the tropical rainforest. They are superb swimmers and can hold their breath underwater for up to five minutes at a time. Capybara have flourished in cattle ranches. They roam in home ranges averaging in high-density populations. Many escapees from captivity can also be found in similar watery habitats around the world. Sightings are fairly common in Florida, although a breeding population has not yet been confirmed. In 2011, one specimen was spotted on the Central Coast of California. These escaped populations occur in areas where prehistoric capybaras inhabited; late Pleistocene capybaras inhabited Florida and "Hydrochoerus hesperotiganites" in California and "Hydrochoerus gaylordi" in Grenada, and feral capybaras in North America may actually fill the ecological niche of the Pleistocene species. Diet and predation. Capybaras are herbivores, grazing mainly on grasses and aquatic plants, as well as fruit and tree bark. They are very selective feeders and feed on the leaves of one species and disregard other species surrounding it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. Plants that capybaras eat during the summer lose their nutritional value in the winter, so they are not consumed at that time. The capybara's jaw hinge is not perpendicular, so they chew food by grinding back-and-forth rather than side-to-side. Capybaras are autocoprophagous, meaning they eat their own feces as a source of bacterial gut flora, to help digest the cellulose in the grass that forms their normal diet, and to extract the maximum protein and vitamins from their food. They also regurgitate food to masticate again, similar to cud-chewing by cattle. Like other rodents, a capybara's front teeth grow continually to compensate for the constant wear from eating grasses; their cheek teeth also grow continuously. Like its relative the guinea pig, the capybara does not have the capacity to synthesize vitamin C, and capybaras not supplemented with vitamin C in captivity have been reported to develop gum disease as a sign of scurvy. The maximum lifespan of the capybara is 8 to 10 years, but in the wild capybaras usually do not live longer than four years because of predation from South American big cats such as jaguars and cougars and from non-mammalian predators such as harpy eagles, caimans, green anacondas and piranhas. Social organization. Capybaras are known to be gregarious. While they sometimes live solitarily, they are more commonly found in groups of around 10–20 individuals, with two to four adult males, four to seven adult females, and the remainder juveniles. Capybara groups can consist of as many as 50 or 100 individuals during the dry season when the animals gather around available water sources. Males establish social bonds, dominance, or general group consensus. They can make dog-like barks when threatened or when females are herding young. 
Capybaras have two types of scent glands: a morrillo, located on the snout, and anal glands. Both sexes have these glands, but males have much larger morrillos and use their anal glands more frequently. The anal glands of males are also lined with detachable hairs. A crystalline form of scent secretion is coated on these hairs and is released when in contact with objects such as plants. These hairs have a longer-lasting scent mark and are tasted by other capybaras. Capybaras scent-mark by rubbing their morrillos on objects, or by walking over scrub and marking it with their anal glands. Capybaras can spread their scent farther by urinating; however, females usually mark without urinating and scent-mark less frequently than males overall. Females mark more often during the wet season when they are in estrus. In addition to objects, males also scent-mark females. Reproduction. When in estrus, the female's scent changes subtly and nearby males begin pursuit. In addition, a female alerts males she is in estrus by whistling through her nose. During mating, the female has the advantage and mating choice. Capybaras mate only in water, and if a female does not want to mate with a certain male, she either submerges or leaves the water. Dominant males are highly protective of the females, but they usually cannot prevent some of the subordinates from copulating. The larger the group, the harder it is for the male to watch all the females. Dominant males secure significantly more matings than each subordinate, but subordinate males, as a class, are responsible for more matings than each dominant male. The lifespan of the capybara's sperm is longer than that of other rodents. Capybara gestation is 130–150 days, and produces a litter of four young on average, but may produce between one and eight in a single litter. Birth is on land and the female rejoins the group within a few hours of delivering the newborn capybaras, which join the group as soon as they are mobile. Within a week, the young can eat grass, but continue to suckle—from any female in the group—until weaned around 16 weeks. The young form a group within the main group. Alloparenting has been observed in this species. Breeding peaks between April and May in Venezuela and between October and November in Mato Grosso, Brazil. Activities. Though quite agile on land, capybaras are equally at home in the water. They are excellent swimmers, and can remain completely submerged for up to five minutes, an ability they use to evade predators. Capybaras can sleep in water, keeping only their noses out. As temperatures increase during the day, they wallow in water and then graze during the late afternoon and early evening. They also spend time wallowing in mud. They rest around midnight and then continue to graze before dawn. Communication. Capybaras communicate using barks, chirps, whistles, huffs, and purrs. Conservation and human interaction. Capybaras are not considered a threatened species; their population is stable throughout most of their South American range, though in some areas hunting has reduced their numbers. Capybaras are hunted for their meat and pelts in some areas, and otherwise killed by humans who see their grazing as competition for livestock. In some areas, they are farmed, which has the effect of ensuring the wetland habitats are protected. Their survival is aided by their ability to breed rapidly. Capybaras have adapted well to urbanization in South America. 
They can be found in many areas in zoos and parks, and may live for 12 years in captivity, more than double their wild lifespan. Capybaras are docile and usually allow humans to pet and hand-feed them, but physical contact is normally discouraged, as their ticks can be vectors to Rocky Mountain spotted fever. The European Association of Zoos and Aquaria asked Drusillas Park in Alfriston, Sussex, England, to keep the studbook for capybaras, to monitor captive populations in Europe. The studbook includes information about all births, deaths and movements of capybaras, as well as how they are related. Capybaras are farmed for meat and skins in South America. The meat is considered unsuitable to eat in some areas, while in other areas it is considered an important source of protein. In parts of South America, especially in Venezuela, capybara meat is popular during Lent and Holy Week as the Catholic Church (according to a legend) previously issued special dispensation to allow it to be eaten while other meats are generally forbidden. There is widespread perception in Venezuela that consumption of capybaras is exclusive to rural people. In August 2021, Argentine and international media reported that capybaras had been disturbing residents of Nordelta, an affluent gated community north of Buenos Aires built atop the local capybara's preexisting wetland habitat. This inspired social media users to jokingly adopt the capybara as a symbol of class struggle and communism. Brazilian Lyme-like borreliosis likely involves capybaras as reservoirs and "Amblyomma" and "Rhipicephalus" ticks as vectors. A Capybara café in St. Augustine, Florida allows visitors to interact with and give head scratches to the rodents. In popular culture. Izu Shaboten Zoo and other zoos in Japan have prepared hot spring baths for capybaras. Video clips of the bathing capybaras have gained millions of views. The capybaras have influenced an anime character named "Kapibara-san", and a series of merchandise such as plush toys. Capybaras have long been a figure in meme culture, particularly in the 2020s. In 2022, Peronists in Argentina presented them as figures of class struggle after the disturbances in Nordelta. Common meme formats pair capybaras with the song "After Party" by Don Toliver.
6777
6908984
https://en.wikipedia.org/wiki?curid=6777
Computer animation
Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation refers to moving images. Modern computer animation usually uses 3D computer graphics. Computer animation is a digital successor to stop motion and traditional animation. Instead of a physical model or illustration, a digital equivalent is manipulated frame-by-frame. Also, computer-generated animations allow a single graphic artist to produce such content without using actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new, similar image that is advanced slightly in time (usually at a rate of 24, 25, or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures. To trick the visual system into seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster (a frame is one complete image). With rates above 75 to 120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates. Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the appearance of continuous movement. Computer-generated animation. Computer-generated animation is an umbrella term for three-dimensional (3D) animation and 2D computer animation. These also include subcategories like asset-driven, hybrid, and digitally drawn animation. Creators animate using code or software instead of pencil-to-paper drawings. There are many techniques and disciplines in computer-generated animation, some of which are digital representations of traditional animation - such as key frame animation - and some of which are only possible with a computer - such as fluid simulation. 'CG' animators can break physical laws by using mathematical algorithms to cheat mass, force, gravity, and more. Fundamentally, computer-generated animation is a powerful tool which can improve the quality of animation by using the power of computing to unleash the animator's imagination. This is because computer-generated animation allows for things like onion skinning, which lets 2D animators see the flow of their work all at once, and interpolation, which lets 3D animators automate the process of inbetweening. 3D computer animation. Overview. For 3D computer animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. Normally, the differences between key frames are drawn in a process known as tweening. However, in 3D computer animation, this is done automatically, and is called interpolation. Finally, the animation is rendered and composited.
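As a rough sketch of the keyframe interpolation (tweening) just described, the Python example below linearly interpolates an object's position between two keyframes at a fixed frame rate. It is a minimal illustration under assumed values: the function names, keyframe times, and 24 fps rate are hypothetical and do not come from any particular animation package, which in practice would also offer spline-based easing rather than only linear interpolation.

# Minimal keyframe tweening sketch: linear interpolation between two keyframes.
# Names and numbers are illustrative assumptions, not a real animation API.

def lerp(a, b, t):
    """Linearly interpolate between a and b for t in [0, 1]."""
    return a + (b - a) * t

def tween(key_start, key_end, frame_rate=24):
    """Yield (frame, value) pairs between two (time_in_seconds, value) keyframes."""
    (t0, v0), (t1, v1) = key_start, key_end
    total_frames = int((t1 - t0) * frame_rate)
    for frame in range(total_frames + 1):
        t = frame / total_frames          # normalized time from 0 to 1
        yield frame, lerp(v0, v1, t)

# Example: move an object from x = 0 to x = 100 over one second at 24 frames/second.
for frame, x in tween((0.0, 0.0), (1.0, 100.0), frame_rate=24):
    print(f"frame {frame:2d}: x = {x:6.2f}")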
Before becoming a final product, 3D computer animations only exist as a series of moving shapes and systems within 3d software, and must be rendered. This can happen as a separate process for animations developed for movies and short films, or it can be done in real-time when animated for videogames. After an animation is rendered, it can be composited into a final product. Animation attributes. For 3D models, attributes can describe any characteristic of the object that can be animated. This includes transformation (movement from one point to another), scaling, rotation, and more complex attributes like blend shape progression (morphing from one shape to another). Each attribute gets a channel on which keyframes can be set. These keyframes can be used in more complex ways such as animating in layers (combining multiple sets of key frame data), or keying control objects to deform or control other objects. For instance, a character's arms can have a skeleton applied, and the joints can have transformation and rotation keyframes set. The movement of the arm joints will then cause the arm shape to deform. Interpolation. 3D animation software interpolates between keyframes by generating a spline between keys plotted on a graph which represents the animation. Additionally, these splines can follow Bézier curves to control how the spline curves relative to the keyframes. Using interpolation allows 3D animators to dynamically change animations without having to redo all the in-between animation. This also allows the creation of complex movements such as ellipses with only a few keyframes. Lastly, interpolation allows the animator to change the framerate, timing, and even scale of the movements at any point in the animation process. Procedural and node-based Animation. Another way to automate 3D animation is to use procedural tools such as 4D noise. Noise is any algorithm that plots pseudo-random values within a dimensional space. 4D noise can be used to do things like move a swarm of bees around; the first three dimensions correspond to the position of the bees in space, and the fourth is used to change the bee's position over time. Noise can also be used as a cheap replacement for simulation. For example, smoke and clouds can be animated using noise. Node-based animation is useful for animating organic and chaotic shapes. By using nodes, an animator can build up a complex set of animation rules that can be applied either to many objects at once, or one very complex object. A good example of this would be setting the movement of particles to match the beat of a song. Disciplines of 3D animation. There are many different disciplines of 3D animation, some of which include entirely separate artforms. For example, hair simulation for computer animated characters in and of itself is a career path which involves separate workflows, and different software and tools. The combination of all or some 3D computer animation disciplines is commonly referred to within the animation industry as the 3D animation pipeline. 2D computer animation. 2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings. Computer animation is essentially a digital successor to stop motion techniques, but using 3D models, and traditional animation techniques using frame-by-frame animation of 2D illustrations. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used with or without that virtual skeleton. 2D sprites and pseudocode. 
In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:

var int x := 0, y := screenHeight / 2
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)  // draw on top of the background
    x := x + 5            // move to the right

Computer-assisted animation. Computer-assisted animation is usually classed as two-dimensional (2D) animation and is also known as digital ink and paint. Drawings are either hand drawn (pencil to paper) or interactively drawn (on the computer) using different assisting appliances and are positioned into specific software packages. Within the software package, the creator places drawings into different key frames which fundamentally create an outline of the most important movements. The computer then fills in the "in-between frames", a process commonly known as tweening. Computer-assisted animation employs new technologies to produce content faster than is possible with traditional animation, while still retaining the stylistic elements of traditionally drawn characters or objects. Examples of films produced using computer-assisted animation are the rainbow sequence at the end of "The Little Mermaid" (the rest of the films listed use digital ink and paint in their entirety), "The Rescuers Down Under", "Beauty and the Beast", "Aladdin", "The Lion King", "Pocahontas", "The Hunchback of Notre Dame", "Hercules", "Mulan", "Tarzan", "We're Back! A Dinosaur's Story", "Balto", "Anastasia", "Titan A.E.", "The Prince of Egypt", "The Road to El Dorado", ' and '. History. Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Other digital animation was also practiced at the Lawrence Livermore National Laboratory. In 1967, a computer animation named "Hummingbird" was created by Charles Csuri and James Shaffer. In 1968, a computer animation called "" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around. In 1971, a computer animation called "Metadata" was created, showing various shapes. An early step in the history of computer animation was the sequel to the 1973 film "Westworld," a science-fiction film about a society in which robots live and work among humans. The sequel, "Futureworld" (1976), used 3D wire-frame imagery, which featured a computer-animated hand and face, both created by University of Utah graduates Edwin Catmull and Fred Parke. This imagery originally appeared in their student film "A Computer Animated Hand", which they completed in 1972. Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques attended by thousands of computer professionals. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima. Film and television. CGI short films have been produced as independent animation since 1976.
Early examples of feature films incorporating CGI animation include the live-action films "Star Trek II: The Wrath of Khan" and "Tron" (both 1982), and the Japanese anime film "Golgo 13: The Professional" (1983). "VeggieTales", made in 1993, was the first American fully 3D computer-animated series sold directly to home video; its success inspired other animation series, such as "ReBoot" (1994) and "Beast Wars: Transformers" (1996), to adopt a fully computer-generated style. The first full-length computer-animated television series was "ReBoot", which debuted in September 1994; the series followed the adventures of characters who lived inside a computer. The first feature-length computer-animated film was "Toy Story" (1995), which was made by Disney and Pixar: following an adventure centered on anthropomorphic toys and their owners, this groundbreaking film was also the first of many fully computer-animated movies. The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. Films like "Avatar" (2009) and "The Jungle Book" (2016) use CGI for the majority of the movie runtime, but still incorporate human actors into the mix. Computer animation in this era has achieved photorealism, to the point that computer-animated films such as "The Lion King" (2019) can be marketed as if they were live-action. Animation methods. In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, which is analogous to a skeleton or stick figure. Its segments are arranged into a default position known as a bind pose, or T-pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in "Toy Story", for example, uses 712 Avars (212 in the face alone). The computer does not usually render the skeletal model directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame. There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called "keyframing". Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation. In contrast, a newer method called "motion capture" makes use of live-action footage. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. Their motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character. Each method has its advantages, and as of 2007, games and films were using either or both of these methods in their productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film "Pirates of the Caribbean: Dead Man's Chest", Bill Nighy provided the performance for the character Davy Jones.
Even though Nighy does not appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, and so on. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be achieved with conventional costuming. Modeling. 3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as "rigging", the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two. 3D models rigged for animation may contain thousands of control points; for example, "Woody" from "Toy Story" uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie "The Chronicles of Narnia: The Lion, the Witch and the Wardrobe", which had about 1,851 controllers (742 in the face alone). In the 2004 film "The Day After Tomorrow", designers had to depict forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of "King Kong", actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots, and his expressions were used to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in Peter Jackson's "The Lord of the Rings" trilogy. Equipment. Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can require a great deal of time on an ordinary home computer. Professional animators of movies, television and video games can produce photorealistic animation with high detail; this level of quality for movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used; Silicon Graphics noted in 1989 that the animation industry's needs typically drove graphical innovations in workstations. Graphics workstation computers use two to four processors, are far more powerful than a typical home computer, and are specialized for rendering. Many workstations (known as a "render farm") are networked together to effectively act as a giant computer, resulting in a computer-animated movie that can be completed in about one to five years (however, this process is not composed solely of rendering). A workstation typically costs $2,000 to $16,000, with the more expensive stations able to render much faster due to the more technologically advanced hardware they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools for movie animation. Programs like Blender allow people who cannot afford expensive animation and rendering software to work in a similar manner to those who use commercial-grade equipment.
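The rigging and Avar-driven skeletons described under Modeling and Animation methods above can be reduced to a very small sketch. The following is a hypothetical two-joint arm in Python, in which the keyed joint angles act as Avars and forward kinematics places the joints; the bone lengths, angles, and names are illustrative only and are not taken from any production rig or package API.

    import math

    def arm_positions(shoulder_deg, elbow_deg, upper_len=1.0, fore_len=0.8):
        """Return shoulder, elbow and wrist positions for the given joint angles (Avars)."""
        a1 = math.radians(shoulder_deg)
        a2 = math.radians(shoulder_deg + elbow_deg)  # elbow angle is relative to the upper arm
        shoulder = (0.0, 0.0)
        elbow = (upper_len * math.cos(a1), upper_len * math.sin(a1))
        wrist = (elbow[0] + fore_len * math.cos(a2), elbow[1] + fore_len * math.sin(a2))
        return shoulder, elbow, wrist

    # Keying just two angles per frame animates the whole limb.
    for frame, (sh, el) in enumerate([(0, 0), (30, 20), (60, 45)]):
        print(frame, [tuple(round(c, 2) for c in p) for p in arm_positions(sh, el)])

In a real package, the two angles would themselves be keyframed and interpolated as described earlier, and the character mesh would deform to follow the joints through skinning weights.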
Facial animation. The realistic modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on "State of the art in Facial Animation" in 1989 and 1990 proved to be a turning point in the field by bringing together and consolidating multiple research elements, and they sparked interest among a number of researchers. The Facial Action Coding System (with 46 "action units" such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, and so on, and the field has made significant progress since then; the use of facial microexpressions has also increased. In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAPs). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model. Realism. Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or it can mean making the characters' animation believable and lifelike. Computer animation can also be realistic with or without photorealistic rendering. One trend in computer animation has been the effort to create human characters that look and move with the highest degree of realism. A possible outcome when attempting to make pleasing, realistic human characters is the uncanny valley, the concept where the human audience (up to a point) tends to have an increasingly negative emotional response as a human replica looks and acts more and more human. Films that have attempted photorealistic human characters, such as "The Polar Express", "Beowulf", and "A Christmas Carol", have been criticized as "disconcerting" and "creepy". The goal of computer animation is not always to emulate live action as closely as possible, so many animated films instead feature characters who are anthropomorphic animals, legendary creatures and characters, superheroes, or otherwise have non-realistic, cartoon-like proportions. Computer animation can also be tailored to mimic or substitute for other kinds of animation, like traditional stop-motion animation (as shown in "Flushed Away" or "The Peanuts Movie"). Some of the long-standing basic principles of animation, like squash and stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation. Web animations. The popularity of websites that allow members to upload their own movies for others to view has created a growing community of independent and amateur computer animators. With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. The ease with which these animations can be distributed has also attracted professional animation talent. Companies such as PowToon and Vyond attempt to bridge the gap by giving amateurs access to professional animations as clip art.
The oldest (most backward-compatible) web-based animations are in the animated GIF format, which can be uploaded and viewed on the web easily. However, the raster graphics format of GIF animations slows the download and frame rate, especially with larger screen sizes. The growing demand for higher-quality web-based animations was met by a vector graphics alternative that relied on the use of a plugin. For decades, Flash animations were a common format, until the web development community abandoned support for the Flash Player plugin. Web browsers on mobile devices and mobile operating systems never fully supported the Flash plugin. By this time, internet bandwidth and download speeds had increased, making raster graphic animations more convenient. Some of the more complex vector graphic animations had a slower frame rate due to complex rendering compared to some of the raster graphic alternatives. Many of the GIF and Flash animations were already converted to digital video formats, which were compatible with mobile devices and reduced file sizes via video compression technology. However, compatibility was still problematic, as some video formats, such as Apple's QuickTime and Microsoft Silverlight, required plugins. YouTube was also relying on the Flash plugin to deliver digital video in the Flash Video format. The latest alternatives are HTML5-compatible animations. Technologies such as JavaScript and CSS animations made sequencing the movement of images in HTML5 web pages more convenient. SVG animations offered a vector graphic alternative to the original Flash graphic format, SmartSketch. YouTube offers an HTML5 alternative for digital video. APNG (Animated PNG) offered a raster graphic alternative to animated GIF files that enables multi-level transparency not available in GIFs. Detailed example. Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting Boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
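The polygon mathematics mentioned in this example can be illustrated with a minimal sketch: rotate the vertices of a single triangle about the Y axis and project them onto a 2D image plane, once per frame. This is an illustrative toy, not the pipeline of any particular renderer; real systems add texturing, lighting, clipping, and rasterization on top of this step.

    import math

    def rotate_y(point, angle_deg):
        """Rotate a 3D point about the Y axis by the given angle."""
        x, y, z = point
        a = math.radians(angle_deg)
        return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

    def project(point, viewer_distance=4.0):
        """Very simple perspective projection onto the image plane (illustrative only)."""
        x, y, z = point
        scale = viewer_distance / (viewer_distance + z)
        return (round(x * scale, 3), round(y * scale, 3))

    triangle = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0)]  # one polygon
    for frame_angle in (0, 30, 60):  # three frames of a simple turntable move
        print(frame_angle, [project(rotate_y(v, frame_angle)) for v in triangle])

Repeating such transformations for every polygon of a model on every frame accounts for a large part of the arithmetic involved before shading and compositing.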
6778
49534133
https://en.wikipedia.org/wiki?curid=6778
Ceawlin of Wessex
Ceawlin ( ; also spelled Ceaulin, Caelin, Celin, died "ca." 593) was a King of Wessex. He may have been the son of Cynric of Wessex and the grandson of Cerdic of Wessex, whom the "Anglo-Saxon Chronicle" represents as the leader of the first group of Saxons to come to the land which later became Wessex. Ceawlin was active during the last years of the Anglo-Saxon expansion, with little of southern England remaining in the control of the native Britons by the time of his death. The chronology of Ceawlin's life is highly uncertain. The historical accuracy and dating of many of the events in the later "Anglo-Saxon Chronicle" have been called into question, and his reign is variously listed as lasting seven, seventeen, or thirty-two years. The "Chronicle" records several battles of Ceawlin's between the years 556 and 592, including the first record of a battle between different groups of Anglo-Saxons, and indicates that under Ceawlin Wessex acquired significant territory, some of which was later to be lost to other Anglo-Saxon kingdoms. Ceawlin is also named as one of the eight ""bretwaldas", a title given in the "Chronicle" to eight rulers who had overlordship over southern Britain, although the extent of Ceawlin's control is not known. Ceawlin died in 593, having been deposed the year before, possibly by his successor, Ceol. He is recorded in various sources as having two sons, Cutha and Cuthwine, but the genealogies in which this information is found are known to be unreliable. Historical context. The history of the sub-Roman period in Britain is poorly sourced and the subject of a number of important disagreements among historians. It appears, however, that in the fifth century, raids on Britain by continental peoples developed into migrations. The newcomers included Angles, Saxons, Jutes and Frisians. These peoples captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mons Badonicus halted the Anglo-Saxon advance for fifty years. Near the year 550, however, the British began to lose ground once more, and within twenty-five years, it appears that control of almost all of southern England was in the hands of the invaders. The peace following the battle of Mons Badonicus is attested partly by Gildas, a monk, who wrote "De Excidio et Conquestu Britanniae" or "On the Ruin and Conquest of Britain" during the middle of the sixth century. This essay is a polemic against corruption and Gildas provides little in the way of names and dates. He appears, however, to state that peace had lasted from the year of his birth to the time he was writing. The "Anglo-Saxon Chronicle" is the other main source that bears on this period, in particular in an entry for the year 827 that records a list of the kings who bore the title "bretwalda"", or "Britain-ruler". That list shows a gap in the early sixth century that matches Gildas's version of events. Ceawlin's reign belongs to the period of Anglo-Saxon expansion at the end of the sixth century. Though there are many unanswered questions about the chronology and activities of the early West Saxon rulers, it is clear that Ceawlin was one of the key figures in the final Anglo-Saxon conquest of southern Britain. Early West Saxon sources. The two main written sources for early West Saxon history are the "Anglo-Saxon Chronicle" and the West Saxon Genealogical Regnal List. 
The "Chronicle" is a set of annals which were compiled near the year 890, during the reign of King Alfred the Great of Wessex. They record earlier material for the older entries, which were assembled from earlier annals that no longer survive, as well as from saga material that might have been transmitted orally. The "Chronicle" dates the arrival of the future "West Saxons" in Britain to 495, when Cerdic and his son, Cynric, land at "Cerdices ora", or Cerdic's shore. Almost twenty annals describing Cerdic's campaigns and those of his descendants appear interspersed through the next hundred years of entries in the "Chronicle". Although these annals provide most of what is known about Ceawlin, the historicity of many of the entries is uncertain. The West Saxon Genealogical Regnal List is a list of rulers of Wessex, including the lengths of their reigns. It survives in several forms, including as a preface to the [B] manuscript of the "Chronicle". Like the "Chronicle", the List was compiled in its present form during the reign of Alfred the Great, but an earlier version of the List was also one of the sources of the "Chronicle" itself. Both the list and the "Chronicle" are influenced by the desire of their writers to use a single line of descent to trace the lineage of the Kings of Wessex through Cerdic to Gewis, the legendary eponymous ancestor of the West Saxons, who is made to descend from Woden. The result served the political purposes of the scribe but is riddled with contradictions for historians. The contradictions may be seen clearly by calculating dates by different methods from various sources. The first event in West Saxon history whose date can be regarded as reasonably certain is the baptism of Cynegils, which occurred in the late 630s, perhaps as late as 640. The "Chronicle" dates Cerdic's arrival to 495, but adding up the lengths of the reigns as given in the West Saxon Genealogical Regnal List leads to the conclusion that Cerdic's reign might have started in 532, a difference of 37 years. Neither 495 nor 532 may be treated as reliable; however, the latter date relies on the presumption that the Regnal List is correct in presenting the Kings of Wessex as having succeeded one another, with no omitted kings, and no joint kingships, and that the durations of the reigns are correct as given. None of these presumptions may be made safely. The sources also are inconsistent on the length of Ceawlin's reign. The "Chronicle" gives it as thirty-two years, from 560 to 592, but the manuscripts of the Regnal List disagree: different copies give it as seven or seventeen years. David Dumville's detailed study of the Regnal List finds that it originally dated the arrival of the West Saxons in England to 532, and favours seven years as the earliest claimed length of Ceawlin's reign, with dates of 581–588 proposed. Dumville suggests that Ceawlin's reign length was then inflated to help extend the longevity of the Cerdicing dynasty further back into the past and that Ceawlin's reign specifically was extended because he is mentioned by Bede, giving him a status which led later West Saxon historians to conclude that he deserved a more impressive-looking reign. The sources do agree that Ceawlin is the son of Cynric and he usually is named as the father of Cuthwine. There is one discrepancy in this case: the entry for 685 in the [A] version of the "Chronicle" assigns Ceawlin a son, Cutha, but in the 855 entry in the same manuscript, Cutha is listed as the son of Cuthwine. 
Cutha also is named as Ceawlin's brother in the [E] and [F] versions of the "Chronicle", in the 571 and 568 entries, respectively. Whether Ceawlin is a descendant of Cerdic is a matter of debate. Subgroupings of different West Saxon lineages give the impression of separate groups, of which Ceawlin's line is one. Some of the problems in the Wessex genealogies may have come about because of efforts to integrate Ceawlin's line with the other lineages: it became very important to the West Saxons to be able to trace the ancestors of their rulers back to Cerdic. Another reason for doubting the literal nature of these early genealogies is that the etymology of the names of several early members of the dynasty does not appear to be Germanic, as would be expected in the names of leaders of an apparently Anglo-Saxon dynasty. The name "Ceawlin" has no convincing Old English etymology; it seems more likely to be of British origin. The earliest sources do not use the term "West Saxon". According to Bede's "Ecclesiastical History of the English People", the term is interchangeable with the Gewisse. The term "West Saxon" appears only in the late seventh century, after the reign of Cædwalla. West Saxon expansion. Ultimately, the kingdom of Wessex occupied the southwest of England, but the initial stages in this expansion are not apparent from the sources. Cerdic's landing, whenever it is to be dated, seems to have been near the Isle of Wight, and the annals record the conquest of the island in 530. In 534, according to the "Chronicle", Cerdic died and his son Cynric took the throne; the "Chronicle" adds that "they gave the Isle of Wight to their nephews, Stuf and Wihtgar". These records are in direct conflict with Bede, who states that the Isle of Wight was settled by Jutes, not Saxons; the archaeological record is somewhat in favour of Bede on this. Subsequent entries in the "Chronicle" give details of some of the battles by which the West Saxons won their kingdom. Ceawlin's campaigns are not given as near the coast. They range along the Thames Valley and beyond, as far as Surrey in the east and the mouth of the Severn in the west. Ceawlin clearly is part of the West Saxon expansion, but the military history of the period is difficult to understand. In what follows the dates are as given in the "Chronicle", although, as noted above, these are earlier than now thought accurate. 556:. The first record of a battle fought by Ceawlin is in 556, when he and his father, Cynric, fought the native Britons at "", or Bera's Stronghold. This now is identified as Barbury Castle, an Iron Age hill fort in Wiltshire, near Swindon. Cynric would have been king of Wessex at this time. 568: Wibbandun. The first battle Ceawlin fought as king is dated by the "Chronicle" to 568 when he and Cutha fought with Æthelberht, the king of Kent. The entry says "Here Ceawlin and Cutha fought against Aethelberht and drove him into Kent; and they killed two ealdormen, Oslaf and Cnebba, on Wibbandun." The location of "Wibbandun", which can be translated as "Wibba's Mount", has not been identified definitely; it was at one time thought to be Wimbledon, but this now is known to be incorrect. David Cooper proposes Wyboston, a small village 8 miles north-east of Bedford on the west bank of the Great Ouse. Wibbandun is often written as Wibba's Dun, which is close phonetically to Wyboston and Æthelberht's dominance, from Kent to the Humber according to Bede, extended across those Anglian territories south of the Wash. 
It was this region that came under threat from Ceawlin as he looked to establish a defensible boundary on the Great Ouse River in the easternmost part of his territory. In addition, Cnebba, named as slain in this battle, has been associated with Knebworth, which lies 20 miles to the south of Wyboston. Half a mile south of Wyboston is a village called Chawston. The origin of the place name is unknown but might be derived from the Old English "Ceawston" or "Ceawlinston". A defeat at Wyboston for Æthelberht would have damaged his overlord status and diminished his influence over the Anglians. The idea that he was driven or "pursued" into Kent (depending on which Anglo-Saxon Chronicle translation is preferred) should not be taken literally. Similar phraseology is often found in the Chronicle when one king bests another. A defeat suffered as part of an expedition to help his Anglian clients would have caused Æthelberht to withdraw into Kent to recover. This battle is notable as the first recorded conflict between the invading peoples: previous battles recorded in the "Chronicle" are between the Anglo-Saxons and the native Britons. There are multiple examples of joint kingship in Anglo-Saxon history, and this may be another: it is not clear what Cutha's relationship to Ceawlin is, but it certainly is possible he was also a king. The annal for 577, below, is another possible example. 571: Bedcanford. The annal for 571 reads: "Here Cuthwulf fought against the Britons at Bedcanford, and took four settlements: Limbury and Aylesbury, Benson and Eynsham; and in the same year he passed away." Cuthwulf's relationship with Ceawlin is unknown, but the alliteration common to Anglo-Saxon royal families suggests Cuthwulf may be part of the West Saxon royal line. The location of the battle itself is unidentified. It has been suggested that it was Bedford, but what is known of the early history of Bedford's names does not support this. This battle is of interest because it is surprising that an area so far east should still be in Briton hands this late: there is ample archaeological evidence of early Saxon and Anglian presence in the Midlands, and historians generally have interpreted Gildas's "De Excidio" as implying that the Britons had lost control of this area by the mid-sixth century. One possible explanation is that this annal records a reconquest of land that was lost to the Britons in the campaigns ending in the battle of Mons Badonicus. 577: Lower Severn. The annal for 577 reads "Here Cuthwine and Ceawlin fought against the Britons, and they killed three kings, Coinmail and Condidan and Farinmail, in the place which is called Dyrham, and took three cities: Gloucester and Cirencester and Bath." This entry is all that is known of these Briton kings; their names are in an archaic form that makes it very likely that this annal derives from a much older written source. The battle itself has long been regarded as a key moment in the Saxon advance, since in reaching the Bristol Channel, the West Saxons divided the Britons west of the Severn from land communication with those in the peninsula to the south of the Channel. Wessex almost certainly lost this territory to Penda of Mercia in 628, when the "Chronicle" records that "Cynegils and Cwichelm fought against Penda at Cirencester and then came to an agreement." It is possible that when Ceawlin and Cuthwine took Bath, they found the Roman baths still operating to some extent. 
Nennius, a ninth-century historian, mentions a "Hot Lake" in the land of the Hwicce, which was along the Severn, and adds "It is surrounded by a wall, made of brick and stone, and men may go there to bathe at any time, and every man can have the kind of bath he likes. If he wants, it will be a cold bath; and if he wants a hot bath, it will be hot". Bede also describes hot baths in the geographical introduction to the "Ecclesiastical History" in terms very similar to those of Nennius. Wansdyke, an early-medieval defensive linear earthwork, runs from south of Bristol to near Marlborough, Wiltshire, passing not far from Bath. It probably was built in the fifth or sixth centuries, perhaps by Ceawlin. 584: Fethan leag. Ceawlin's last recorded victory is in 584. The entry reads "Here Ceawlin and Cutha fought against the Britons at the place which is named Fethan leag, and Cutha was killed, and Ceawlin took many towns and countless war-loot, and in anger, he turned back to his own [territory]." There is a wood named "Fethelée" mentioned in a twelfth-century document that relates to Stoke Lyne, in Oxfordshire, and it now is thought that the battle of Fethan leag must have been fought in this area. The phrase "in anger he turned back to his own" probably indicates that this annal is drawn from saga material, as perhaps are all of the early Wessex annals. It also has been used to argue that perhaps, Ceawlin did not win the battle and that the chronicler chose not to record the outcome fully—a king does not usually come home "in anger" after taking "many towns and countless war-loot". It may be that Ceawlin's overlordship of the southern Britons came to an end with this battle. Bretwaldaship. About 731, Bede, a Northumbrian monk and chronicler, wrote a work called the "Ecclesiastical History of the English People". The work was not primarily a secular history, but Bede provides much information about the history of the Anglo-Saxons, including a list early in the history of seven kings who, he said, held "imperium" over the other kingdoms south of the Humber. The usual translation for "imperium" is "overlordship". Bede names Ceawlin as the second on the list, although he spells it "Caelin", and adds that he was "known in the speech of his own people as Ceaulin". Bede also makes it clear that Ceawlin was not a Christian—Bede mentions a later king, Æthelberht of Kent, as "the first to enter the kingdom of heaven". The "Anglo-Saxon Chronicle," in an entry for the year 827, repeats Bede's list, adds Egbert of Wessex, and also mentions that they were known as "bretwalda", or "Britain-ruler". A great deal of scholarly attention has been given to the meaning of this word. It has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership. Bede says that these kings had authority "south of the Humber", but the span of control, at least of the earlier bretwaldas, likely was less than this. In Ceawlin's case the range of control is hard to determine accurately, but Bede's inclusion of Ceawlin in the list of kings who held "imperium", and the list of battles he is recorded as having won, indicates an energetic and successful leader who, from a base in the upper Thames valley, dominated much of the surrounding area and held overlordship over the southern Britons for some period. 
Despite Ceawlin's military successes, the northern conquests he made could not always be retained: Mercia took much of the upper Thames valley, and the north-eastern towns won in 571 were among territory subsequently under the control of Kent and Mercia at different times. Bede's concept of the power of these overlords also must be regarded as the product of his eighth century viewpoint. When the "Ecclesiastical History" was written, Æthelbald of Mercia dominated the English south of the Humber, and Bede's view of the earlier kings was doubtless strongly coloured by the state of England at that time. For the earlier "bretwaldas", such as Ælle and Ceawlin, there must be some element of anachronism in Bede's description. It also is possible that Bede only meant to refer to power over Anglo-Saxon kingdoms, not the native Britons. Ceawlin is the second king on Bede's list. All the subsequent bretwaldas followed more or less consecutively, but there is a long gap, perhaps fifty years, between Ælle of Sussex, the first bretwalda, and Ceawlin. The lack of gaps between the overlordships of the later bretwaldas has been used to make an argument for Ceawlin's dates matching the later entries in the "Chronicle" with reasonable accuracy. According to this analysis, the next bretwalda, Æthelberht of Kent, must have been already a dominant king by the time Pope Gregory the Great wrote to him in 601, since Gregory would have not written to an underking. Ceawlin defeated Æthelberht in 568 according to the "Chronicle". Æthelberht's dates are a matter of debate, but recent scholarly consensus has his reign starting no earlier than 580. The 568 date for the battle at Wibbandun is thought to be unlikely because of the assertion in various versions of the West Saxon Genealogical Regnal List that Ceawlin's reign lasted either seven or seventeen years. If this battle is placed near the year 590, before Æthelberht had established himself as a powerful king, then the subsequent annals relating to Ceawlin's defeat and death may be reasonably close to the correct date. In any case, the battle with Æthelberht is unlikely to have been more than a few years on either side of 590. The gap between Ælle and Ceawlin, on the other hand, has been taken as supporting evidence for the story told by Gildas in "De Excidio" of a peace lasting a generation or more following a Briton victory at Mons Badonicus. Æthelberht of Kent succeeds Ceawlin on the list of bretwaldas, but the reigns may overlap somewhat: recent evaluations give Ceawlin a likely reign of 581–588, and place Æthelberht's accession near to the year 589, but these analyses are no more than scholarly guesses. Ceawlin's eclipse in 592, probably by Ceol, may have been the occasion for Æthelberht to rise to prominence; Æthelberht very likely was the dominant Anglo-Saxon king by 597. Æthelberht's rise may have been earlier: the 584 annal, even if it records a victory, is the last victory of Ceawlin's in the "Chronicle", and the period after that may have been one of Æthelberht's ascent and Ceawlin's decline. Wessex at Ceawlin's death. Ceawlin lost the throne of Wessex in 592. The annal for that year reads, in part: "Here there was great slaughter at Woden's Barrow, and Ceawlin was driven out." Woden's Barrow is a tumulus, now called Adam's Grave, at Alton Priors, Wiltshire. No details of his opponent are given. The medieval chronicler William of Malmesbury, writing in about 1120, says that it was "the Angles and the British conspiring together". 
Alternatively, it may have been Ceol, who is supposed to have been the next king of Wessex, ruling for six years according to the West Saxon Genealogical Regnal List. According to the "Anglo-Saxon Chronicle", Ceawlin died the following year. The relevant part of the annal reads: "Here Ceawlin and Cwichelm and Crida perished." Nothing more is known of Cwichelm and Crida, although they may have been members of the Wessex royal house—their names fit the alliterative pattern common to royal houses of the time. According to the Regnal List, Ceol was a son of Cutha, who was a son of Cynric; and Ceolwulf, his brother, reigned for seventeen years after him. It is possible that some fragmentation of control among the West Saxons occurred at Ceawlin's death: Ceol and Ceolwulf may have been based in Wiltshire, as opposed to the upper Thames valley. This split also may have contributed to Æthelberht's ability to rise to dominance in southern England. The West Saxons remained influential in military terms, however: the "Chronicle" and Bede record continued military activity against Essex and Sussex within twenty or thirty years of Ceawlin's death. External links.
6779
2016996
https://en.wikipedia.org/wiki?curid=6779
Christchurch (disambiguation)
Christchurch is the largest city in the South Island of New Zealand. Christchurch may also refer to:
6780
18872885
https://en.wikipedia.org/wiki?curid=6780
CD-R
CD-R (Compact disc-recordable) is a digital optical disc storage format. A CD-R disc is a compact disc that can only be written once and read arbitrarily many times. CD-R discs (CD-Rs) are readable by most CD readers manufactured prior to the introduction of CD-R, unlike CD-RW discs. History. Originally named CD Write-Once (WO), the CD-R specification was first published in 1988 by Philips and Sony in the Orange Book, which consists of several parts that provide details of the CD-WO, CD-MO (Magneto-Optic), and later CD-RW (Re Writable). The latest editions have abandoned the use of the term "CD-WO" in favor of "CD-R", while "CD-MO" was rarely used. Written CD-Rs and CD-RWs are, in the aspect of low-level encoding and data format, fully compatible with the audio CD ("Red Book" CD-DA) and data CD ("Yellow Book" CD-ROM) standards. The Yellow Book standard for CD-ROM only specifies a high-level data format and refers to the Red Book for all physical format and low-level code details, such as track pitch, linear bit density, and bitstream encoding. This means they use Eight-to-Fourteen Modulation, CIRC error correction, and, for CD-ROM, the third error correction layer defined in the Yellow Book. Properly written CD-R discs on blanks of less than 80 minutes in length are fully compatible with the audio CD and CD-ROM standards in all details including physical specifications. 80-minute CD-R discs marginally violate the Red Book physical format specifications, and longer discs are non-compliant. CD-RW discs have lower reflectivity than CD-R or pressed (non-writable) CDs and for this reason cannot meet the Red Book standard. Some hardware compatible with Red Book CDs may have difficulty reading CD-Rs and, because of their lower reflectivity, especially CD-RWs. To the extent that CD hardware can read extended-length discs or CD-RW discs, it is because that hardware has capability beyond the minimum required by the Red Book and Yellow Book standards (the hardware is more capable than it needs to be to bear the Compact Disc logo). CD-R recording systems available in 1990 were similar to the washing machine-sized Meridian CD Publisher, based on the two-piece rack mount Yamaha PDS audio recorder costing $35,000, not including the required external ECC circuitry for data encoding, SCSI hard drive subsystem, and MS-DOS control computer. On July 3, 1991, the first recording of a concert directly to CD was made using a Yamaha YPDR 601. The concert was performed by Claudio Baglioni at the Stadio Flaminio in Rome, Italy. At that time, it was generally anticipated that recordable CDs would have a lifetime of no more than 10 years. However, as of July 2020 the CD from this live recording still plays back with no uncorrectable errors. In the same year, the first company to successfully & professionally duplicate CD-R media was CDRM Recordable Media. With quality technical media being limited from Taiyo Yuden, Early CD-R Media used Phthalocyanine dye for duplication, which has a light aqua color. By 1992, the cost of typical recorders was down to $10,000–12,000, and in September 1995, Hewlett-Packard introduced its model 4020i manufactured by Philips, which, at $995, was the first recorder to cost less than $1000. As of the 2010s, devices capable of writing to CD-Rs and other types of writable CDs could be found under $20. The dye materials developed by Taiyo Yuden made it possible for CD-R discs to be compatible with Audio CD and CD-ROM discs. Music CD-Rs. 
In the United States, there is a market separation between "music" CD-Rs and "data" CD-Rs, the former being notably more expensive than the latter due to industry copyright arrangements with the RIAA. Specifically, the price of every music CD-R includes a mandatory royalty disbursed to RIAA members by the disc manufacturer; this grants the disc an "application flag" indicating that the royalty has been paid. Consumer standalone music recorders refuse to burn CD-Rs that are missing this flag. Professional CD recorders are not subject to this restriction and can record music to discs with or without the flag. The two types of discs are functionally and physically identical other than this, and computer CD burners can record data and/or music to either. New music CD-Rs were still being manufactured as of the late 2010s, although demand for them has declined as CD-based music recorders have been supplanted by other devices incorporating the same or similar functionality. The groove on the surface of a CD-R disc is not a perfect spiral and contains slight sinusoidal deviations called "wobble". Frequency modulation is used to encode data into the wobble with a carrier frequency of 22.05 kHz. This method of storing information is called Absolute Time in Pregroove (ATIP). Within the ATIP data is a 7-bit field called the Disc Application Code, containing bits U1 through U7. The first bit, U1, is used to determine whether a CD-R is considered a "music" CD-R. Physical characteristics. A standard CD-R is a 1.2 mm thick disc made of polycarbonate, about 120 mm (5") in diameter. The 120 mm (5") disc has a storage capacity of 74 minutes of audio or 650 megabytes (MB) of data. CD-R/RWs are available with capacities of 80 minutes of audio or 737,280,000 bytes (703.125 MiB), which they achieve by molding the disc at the tightest allowable tolerances specified in the Orange Book CD-R/CD-RW standards. The engineering margin that was reserved for manufacturing tolerance has been used for data capacity instead, leaving no tolerance for manufacturing; for these discs to be truly compliant with the Orange Book standard, the manufacturing process must be perfect. Despite the foregoing, most CD-Rs on the market have an 80-minute capacity. There are also 90-minute/790 MB and 99-minute/870 MB discs, although they are less common and depart from the Orange Book standard. Due to the limitations of the data structures in the ATIP, 90- and 99-minute blanks will identify themselves as 80-minute ones. As the ATIP is part of the Orange Book standard, its design does not support some nonstandard disc configurations. In order to use the additional capacity, these discs have to be burned using overburn options in the CD recording software. Overburning itself is so named because it is outside the written standards, but, due to market demand, it has nonetheless become a de facto standard function in most CD writing drives and software for them. Some drives use special techniques, such as Plextor's GigaRec or Sanyo's HD-BURN, to write more data onto a given disc; these techniques are deviations from the compact disc (Red, Yellow, and/or Orange Book) standards, making the recorded discs proprietary-formatted and not fully compatible with standard CD players and drives. In certain applications where discs will not be distributed or exchanged outside a private group and will not be archived for a long time, a proprietary format may be an acceptable way to obtain greater capacity (up to 1.2 GB with GigaRec or 1.8 GB with HD-BURN on 99-minute media).
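The capacity figures quoted under Physical characteristics above follow directly from the CD sector layout and can be checked with a few lines of arithmetic. The sketch below assumes the standard rate of 75 sectors per second and 2,048 bytes of user data per Mode 1 data sector (audio sectors carry 2,352 bytes); these constants come from the CD standards rather than from this article.

    SECTORS_PER_SECOND = 75        # CD playback rate
    DATA_BYTES_PER_SECTOR = 2048   # user data in a Mode 1 data sector

    def data_capacity_bytes(minutes):
        """Nominal data capacity of a disc rated for the given number of minutes."""
        return minutes * 60 * SECTORS_PER_SECOND * DATA_BYTES_PER_SECTOR

    print(data_capacity_bytes(74))            # 681,984,000 bytes, roughly 650 MB
    print(data_capacity_bytes(80))            # 737,280,000 bytes
    print(data_capacity_bytes(80) / 1024**2)  # 703.125 MiB, as quoted above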
The greatest risk in using such a proprietary data storage format, assuming that it works reliably as designed, is that it may be difficult or impossible to repair or replace the hardware used to read the media if it fails, is damaged, or is lost after its original vendor discontinues it. Nothing in the Red, Yellow, or Orange Book standards prohibits disc reading/writing devices from having the capacity to read/write discs beyond the compact disc standards. The standards do require discs to meet precise requirements in order to be called compact discs, but the other discs may be called by other names; if this were not true, no DVD drive could legally bear the compact disc logo. While disc players and drives may have capabilities beyond the standards, enabling them to read and write nonstandard discs, there is no assurance, in the absence of explicit additional manufacturer specifications beyond normal compact disc logo certification, that any particular player or drive will perform beyond the standards at all or consistently. If the same device with no explicit performance specs beyond the compact disc logo initially handles nonstandard discs reliably, there is no assurance that it will not later stop doing so, and in that case, there is no assurance that it can be made to do so again by service or adjustment. Discs with capacities larger than 650 MB, and especially those larger than 700 MB, are less interchangeable among players/drives than standard discs and are not very suitable for archival use, as their readability on future equipment, or even on the same equipment at a future time, is not assured unless specifically tested and certified in that combination, even under the assumption that the discs will not degrade at all. The polycarbonate disc contains a spiral groove, called the pregroove because it is molded in before data are written to the disc; it guides the laser beam upon writing and reading information. The pregroove is molded into the top side of the polycarbonate disc, where the pits and lands would be molded if it were a pressed, nonrecordable Red Book CD. The bottom side, which faces the laser beam in the player or drive, is flat and smooth. The polycarbonate disc is coated on the pregroove side with a very thin layer of organic dye. Then, on top of the dye is coated a thin, reflecting layer of silver, a silver alloy, or gold. Finally, a protective coating of a photo-polymerizable lacquer is applied on top of the metal reflector and cured with UV light. A blank CD-R is not "empty"; the pregroove has a wobble (the ATIP), which helps the writing laser to stay on track and to write the data to the disc at a constant rate. Maintaining a constant rate is essential to ensure the proper size and spacing of the pits and lands burned into the dye layer. As well as providing timing information, the ATIP (absolute time in pregroove) is also a data track containing information about the CD-R manufacturer, the dye used, and media information (disc length and so on). The pregroove is not destroyed when the data are written to the CD-R, a point which some copy protection schemes use to distinguish copies from an original CD. Dyes. There are three basic formulations of dye used in CD-Rs: cyanine, phthalocyanine, and azo. There are many hybrid variations of the dye formulations, such as Formazan by Kodak (a hybrid of cyanine and phthalocyanine).
Many manufacturers have added additional coloring to disguise their unstable cyanine CD-Rs in the past, so the formulation of a disc cannot be determined based purely on its color. Similarly, a gold reflective layer does not guarantee the use of phthalocyanine dye. The quality of the disc is also not only dependent on the dye used, it is also influenced by sealing, the top layer, the reflective layer, and the polycarbonate. Simply choosing a disc based on its dye type may be problematic. Furthermore, correct power calibration of the laser in the writer, as well as correct timing of the laser pulses, stable disc speed, and so on, is critical to not only the immediate readability but the longevity of the recorded disc, so for archiving it is important to have not only a high-quality disc but a high-quality writer. In fact, a high-quality writer may produce adequate results with medium-quality media, but high-quality media cannot compensate for a mediocre writer, and discs written by such a writer cannot achieve their maximum potential archival lifetime. Speed. These times only include the actual optical writing pass over the disc. For most disc recording operations, additional time is used for overhead processes, such as organizing the files and tracks, which adds to the theoretical minimum total time required to produce a disc. (An exception might be making a disc from a prepared ISO image, for which the overhead would likely be trivial.) At the lowest write speeds, this overhead takes so much less time than the actual disc writing pass that it may be negligible, but at higher write speeds, the overhead time becomes a larger proportion of the overall time taken to produce a finished disc and may add significantly to it. Also, above 20× speed, drives use a Zoned-CLV or CAV strategy, where the advertised maximum speed is only reached near the outer rim of the disc. This is not taken into account by the above table. (If this were not done, the faster rotation that would be required at the inner tracks could cause the disc to fracture and/or could cause excessive vibration which would make accurate and successful writing impossible.) Writing methods. The blank disc has a pre-groove track onto which the data are written. The pre-groove track, which also contains timing information, ensures that the recorder follows the same spiral path as a conventional CD. A CD recorder writes data to a CD-R disc by pulsing its laser to heat areas of the organic dye layer. The writing process does not produce indentations (pits); instead, the heat permanently changes the optical properties of the dye, changing the reflectivity of those areas. Using a low power laser, so as not to further alter the dye, the disc is read back in the same way as a CD-ROM. However, the reflected light is modulated not by pits, but by the alternating regions of heated and unaltered dye. The change of the intensity of the reflected laser radiation is transformed into an electrical signal, from which the digital information is recovered ("decoded"). Once a section of a CD-R is written, it cannot be erased or rewritten, unlike a CD-RW. A CD-R can be recorded in multiple sessions. A CD recorder can write to a CD-R using several methods including: With careful examination, the written and unwritten areas can be distinguished by the naked eye. CD-Rs are written from the center outwards, so the written area appears as an inner band with slightly different shading. 
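As a rough illustration of the Speed section above: the optical writing pass alone scales inversely with the rated speed, before any of the overhead described there is added, and before accounting for the Zoned-CLV behaviour of fast drives (which only reach their advertised speed near the outer rim). The Python sketch below therefore gives nominal lower bounds, not measured times.

    def nominal_write_minutes(disc_minutes, speed_factor):
        """Time for the optical writing pass alone on a disc of the given audio length."""
        return disc_minutes / speed_factor

    for speed in (1, 4, 16, 52):
        print(f"{speed}x: about {nominal_write_minutes(80, speed):.1f} minutes for an 80-minute disc")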
CDs have a Power Calibration Area, used to calibrate the writing laser before and during recording. CDs contain two such areas: one close to the inner edge of the disc, for low-speed calibration, and another on the outer edge on the disc, for high-speed calibration. The calibration results are recorded on a Recording Management Area (RMA) that can hold up to 99 calibrations. The disc cannot be written after the RMA is full, however, the RMA may be emptied in CD-RW discs. Formatting CD-R into CD-ROM. Choosing a "finalize disc" or "close disc" option during a burn setup, it will no longer accept any future writes, and become read-only. Lifespan. Real-life (not accelerated aging) tests have revealed that some CD-Rs degrade quickly even if stored normally. The quality of a CD-R disc has a large and direct influence on longevity—low-quality discs should not be expected to last very long. According to research conducted by J. Perdereau, CD-Rs are expected to have an average life expectancy of 10 years. Branding is not a reliable guide to quality, because many brands (major as well as no name) do not manufacture their own discs. Instead, they are sourced from different manufacturers of varying quality. For best results, the actual manufacturer and material components of each batch of discs should be verified. Burned CD-Rs suffer from material degradation, just like most writable media. CD-R media have an internal layer of dye used to store data. In a CD-RW disc, the recording layer is made of an alloy of silver and other metals—indium, antimony, and tellurium. In CD-R media, the dye itself can degrade, causing data to become unreadable. As well as degradation of the dye, failure of a CD-R can be due to the reflective surface. While silver is less expensive and more widely used, it is more prone to oxidation, resulting in a non-reflecting surface. Gold, on the other hand, although more expensive and no longer widely used, is an inert material, so gold-based CD-Rs do not suffer from this problem. Manufacturers have estimated the longevity of gold-based CD-Rs to be as high as 100 years. By measuring the rate of correctable data errors, the data integrity and/or manufacturing quality of CD-R media can be measured, allowing for a reliable prediction of future data losses caused by media degradation. Labeling. It is recommended if using adhesive-backed paper labels that the labels be specially made for CD-Rs. A balanced CD vibrates only slightly when rotated at high speed. Bad or improperly made labels, or labels applied off-center, unbalance the CD and can cause it to vibrate when it spins, which causes read errors and even risks damaging the drive. A professional alternative to CD labels is pre-printed CDs using a 5-color silkscreen or offset press. Using a permanent marker pen is also a common practice. However, solvents from such pens can affect the dye layer. Disposal. Data confidentiality. Since CD-Rs, in general, cannot be logically erased to any degree, the disposal of CD-Rs presents a possible security issue if they contain sensitive/private data. Destroying the data requires physically destroying the disc or data layer. Heating the disc in a microwave oven for 10–15 seconds effectively destroys the data layer by causing arcing in the metal reflective layer, but this same arcing may cause damage or excessive wear to the microwave oven. Many office paper shredders are also designed to shred CDs. 
Some recent burners (Plextor, LiteOn) support erase operations on -R media, by "overwriting" the stored data with strong laser power, although the erased area cannot be overwritten with new data. Recycling. The polycarbonate material and possible gold or silver in the reflective layer would make CD-Rs highly recyclable. However, the polycarbonate is of very little value and the quantity of precious metals is so small that it is not profitable to recover them. Consequently, recyclers that accept CD-Rs typically do not offer compensation for donating or transporting the materials.
6781
12331483
https://en.wikipedia.org/wiki?curid=6781
Cytosol
The cytosol, also known as cytoplasmic matrix or groundplasm, is one of the liquids found inside cells (intracellular fluid (ICF)). It is separated into compartments by membranes. For example, the mitochondrial matrix separates the mitochondrion into many compartments. In the eukaryotic cell, the cytosol is surrounded by the cell membrane and is part of the cytoplasm, which also comprises the mitochondria, plastids, and other organelles (but not their internal fluids and structures); the cell nucleus is separate. The cytosol is thus a liquid matrix around the organelles. In prokaryotes, most of the chemical reactions of metabolism take place in the cytosol, while a few take place in membranes or in the periplasmic space. In eukaryotes, while many metabolic pathways still occur in the cytosol, others take place within organelles. The cytosol is a complex mixture of substances dissolved in water. Although water forms the large majority of the cytosol, its structure and properties within cells is not well understood. The concentrations of ions such as sodium and potassium in the cytosol are different to those in the extracellular fluid; these differences in ion levels are important in processes such as osmoregulation, cell signaling, and the generation of action potentials in excitable cells such as endocrine, nerve and muscle cells. The cytosol also contains large amounts of macromolecules, which can alter how molecules behave, through macromolecular crowding. Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol. Definition. The term "cytosol" was first introduced in 1965 by H. A. Lardy, and initially referred to the liquid that was produced by breaking cells apart and pelleting all the insoluble components by ultracentrifugation. Such a soluble cell extract is not identical to the soluble part of the cell cytoplasm and is usually called a cytoplasmic fraction. The term "cytosol" is now used to refer to the liquid phase of the cytoplasm in an intact cell. This excludes any part of the cytoplasm that is contained within organelles. Due to the possibility of confusion between the use of the word "cytosol" to refer to both extracts of cells and the soluble part of the cytoplasm in intact cells, the phrase "aqueous cytoplasm" has been used to describe the liquid contents of the cytoplasm of living cells. Prior to this, other terms, including hyaloplasm, were used for the cell fluid, not always synonymously, as its nature was not well understood (see protoplasm). Properties and composition. The proportion of cell volume that is cytosol varies: for example while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than . This mixture of small molecules is extraordinarily complex, as the variety of molecules that are involved in metabolism (the metabolites) is immense. 
For example, up to 200,000 different small molecules might be made in plants, although not all these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as "E. coli" and baker's yeast predict that under 1,000 are made. Water. Most of the cytosol is water, which makes up about 70% of the total volume of a typical cell. The pH of the intracellular fluid is 7.4, while mouse cell cytosolic pH ranges between 7.0 and 7.4 and is usually higher if a cell is growing. The viscosity of cytoplasm is roughly the same as pure water, although diffusion of small molecules through this liquid is about fourfold slower than in pure water, due mostly to collisions with the large numbers of macromolecules in the cytosol. Studies in the brine shrimp have examined how water affects cell functions; these found that a 20% reduction in the amount of water in a cell inhibits metabolism, with metabolism decreasing progressively as the cell dries out and all metabolic activity halting when the water level reaches 70% below normal. Although water is vital for life, the structure of this water in the cytosol is not well understood, mostly because methods such as nuclear magnetic resonance spectroscopy only give information on the average structure of water, and cannot measure local variations at the microscopic scale. Even the structure of pure water is poorly understood, due to the ability of water to form structures such as water clusters through hydrogen bonds. The classic view of water in cells is that about 5% of this water is strongly bound by solutes or macromolecules as water of solvation, while the majority has the same structure as pure water. This water of solvation is not active in osmosis and may have different solvent properties, so that some dissolved molecules are excluded, while others become concentrated. However, others argue that the effects of the high concentrations of macromolecules in cells extend throughout the cytosol and that water in cells behaves very differently from the water in dilute solutions. These ideas include the proposal that cells contain zones of low- and high-density water, which could have widespread effects on the structures and functions of the other parts of the cell. However, the use of advanced nuclear magnetic resonance methods to directly measure the mobility of water in living cells contradicts this idea, as it suggests that 85% of cell water acts like pure water, while the remainder is less mobile and probably bound to macromolecules. Ions. The concentrations of ions in the cytosol are quite different from those in the extracellular fluid, and the cytosol also contains much higher amounts of charged macromolecules such as proteins and nucleic acids than the outside of the cell. In contrast to extracellular fluid, cytosol has a high concentration of potassium ions and a low concentration of sodium ions. This difference in ion concentrations is critical for osmoregulation, since if the ion levels were the same inside a cell as outside, water would enter constantly by osmosis, because the levels of macromolecules inside cells are higher than their levels outside. Instead, sodium ions are expelled and potassium ions are taken up by the Na⁺/K⁺-ATPase; potassium ions then flow back down their concentration gradient through potassium-selective ion channels, and this loss of positive charge creates a negative membrane potential.
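The size of the membrane potential produced by such a gradient can be estimated with the standard Nernst relation. The short Python sketch below uses typical mammalian potassium concentrations of roughly 140 mM inside and 5 mM outside the cell; these particular numbers are illustrative textbook values rather than figures taken from this article.

    import math

    # Nernst (equilibrium) potential for potassium at body temperature.
    R = 8.314     # gas constant, J/(mol*K)
    T = 310.0     # about 37 degrees C, in kelvin
    F = 96485.0   # Faraday constant, C/mol
    z = 1         # charge of the K+ ion

    K_in, K_out = 140e-3, 5e-3    # mol/L inside and outside the cell (assumed values)
    E_K = (R * T / (z * F)) * math.log(K_out / K_in)
    print(f"{E_K * 1000:.0f} mV")  # roughly -89 mV, i.e. the inside is negative

The actual resting potential of a cell is less negative than this single-ion estimate, because other ions, including sodium and chloride, also contribute.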
To balance this potential difference, negative chloride ions also exit the cell through selective chloride channels. The loss of sodium and chloride ions compensates for the osmotic effect of the higher concentration of organic molecules inside the cell. Cells can deal with even larger osmotic changes by accumulating osmoprotectants such as betaines or trehalose in their cytosol. Some of these molecules can allow cells to survive being completely dried out and allow an organism to enter a state of suspended animation called cryptobiosis. In this state the cytosol and osmoprotectants become a glass-like solid that helps stabilize proteins and cell membranes from the damaging effects of desiccation. The low concentration of calcium in the cytosol allows calcium ions to function as a second messenger in calcium signaling. Here, a signal such as a hormone or an action potential opens calcium channels so that calcium floods into the cytosol. This sudden increase in cytosolic calcium activates other signaling molecules, such as calmodulin and protein kinase C. Other ions such as chloride and potassium may also have signaling functions in the cytosol, but these are not well understood. Macromolecules. Protein molecules that do not bind to cell membranes or the cytoskeleton are dissolved in the cytosol. The amount of protein in cells is extremely high, and approaches 200 mg/ml, occupying about 20–30% of the volume of the cytosol. However, measuring precisely how much protein is dissolved in cytosol in intact cells is difficult, since some proteins appear to be weakly associated with membranes or organelles in whole cells and are released into solution upon cell lysis. Indeed, in experiments where the plasma membranes of cells were carefully disrupted using saponin, without damaging the other cell membranes, only about one quarter of cell protein was released. These cells were also able to synthesize proteins if given ATP and amino acids, implying that many of the enzymes in the cytosol are bound to the cytoskeleton. However, the idea that the majority of the proteins in cells are tightly bound in a network called the microtrabecular lattice is now seen as unlikely. In prokaryotes the cytosol contains the cell's genome, within a structure known as a nucleoid. This is an irregular mass of DNA and associated proteins that control the transcription and replication of the bacterial chromosome and plasmids. In eukaryotes the genome is held within the cell nucleus, which is separated from the cytosol by the nuclear envelope, whose nuclear pores block the free diffusion of any molecule larger than about 10 nanometres in diameter. This high concentration of macromolecules in the cytosol causes an effect called macromolecular crowding, whereby the effective concentration of other macromolecules is increased, since they have less volume to move in. This crowding effect can produce large changes in both the rates and the position of chemical equilibrium of reactions in the cytosol. It is particularly important in its ability to alter dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes, or when DNA-binding proteins bind to their targets in the genome. Organization. Although the components of the cytosol are not separated into regions by cell membranes, these components do not always mix randomly and several levels of organization can localize specific molecules to defined sites within the cytosol. Concentration gradients. 
Although small molecules diffuse rapidly in the cytosol, concentration gradients can still be produced within this compartment. A well-studied example of these is the "calcium sparks" that are produced for a short period in the region around an open calcium channel. These are about 2 micrometres in diameter and last for only a few milliseconds, although several sparks can merge to form larger gradients, called "calcium waves". Concentration gradients of other small molecules, such as oxygen and adenosine triphosphate, may be produced in cells around clusters of mitochondria, although these are less well understood. Protein complexes. Proteins can associate to form protein complexes; these often contain a set of proteins with similar functions, such as enzymes that carry out several steps in the same metabolic pathway. This organization can allow substrate channeling, whereby the product of one enzyme is passed directly to the next enzyme in a pathway without being released into solution. Channeling can make a pathway more rapid and efficient than it would be if the enzymes were randomly distributed in the cytosol, and can also prevent the release of unstable reaction intermediates. Although a wide variety of metabolic pathways involve enzymes that are tightly bound to each other, others may involve more loosely associated complexes that are very difficult to study outside the cell. Consequently, the importance of these complexes for metabolism in general remains unclear. Protein compartments. Some protein complexes contain a large central cavity that is isolated from the remainder of the cytosol. One example of such an enclosed compartment is the proteasome. Here, a set of subunits form a hollow barrel containing proteases that degrade cytosolic proteins. Since these would be damaging if they mixed freely with the remainder of the cytosol, the barrel is capped by a set of regulatory proteins that recognize proteins with a signal directing them for degradation (a ubiquitin tag) and feed them into the proteolytic cavity. Another large class of protein compartments are bacterial microcompartments, which are made of a protein shell that encapsulates various enzymes. These compartments are typically about 100–200 nanometres across and made of interlocking proteins. A well-understood example is the carboxysome, which contains enzymes involved in carbon fixation such as RuBisCO. Biomolecular condensates. Non-membrane-bound organelles can form as biomolecular condensates, which arise by clustering, oligomerisation, or polymerisation of macromolecules to drive colloidal phase separation of the cytoplasm or nucleus. Cytoskeletal sieving. Although the cytoskeleton is not part of the cytosol, the presence of this network of filaments restricts the diffusion of large particles in the cell. For example, in several studies tracer particles larger than about 25 nanometres (about the size of a ribosome) were excluded from parts of the cytosol around the edges of the cell and next to the nucleus. These "excluding compartments" may contain a much denser meshwork of actin fibres than the remainder of the cytosol. These microdomains could influence the distribution of large structures such as ribosomes and organelles within the cytosol by excluding them from some areas and concentrating them in others. Function. The cytosol is the site of multiple cell processes. 
Examples of these processes include signal transduction from the cell membrane to sites within the cell, such as the cell nucleus, or organelles. This compartment is also the site of many of the processes of cytokinesis, after the breakdown of the nuclear membrane in mitosis. Another major function of cytosol is to transport metabolites from their site of production to where they are used. This is relatively simple for water-soluble molecules, such as amino acids, which can diffuse rapidly through the cytosol. However, hydrophobic molecules, such as fatty acids or sterols, can be transported through the cytosol by specific binding proteins, which shuttle these molecules between cell membranes. Molecules taken into the cell by endocytosis or on their way to be secreted can also be transported through the cytosol inside vesicles, which are small spheres of lipids that are moved along the cytoskeleton by motor proteins. The cytosol is the site of most metabolism in prokaryotes, and a large proportion of the metabolism of eukaryotes. For instance, in mammals about half of the proteins in the cell are localized to the cytosol. The most complete data are available in yeast, where metabolic reconstructions indicate that the majority of both metabolic processes and metabolites occur in the cytosol. Major metabolic pathways that occur in the cytosol in animals are protein biosynthesis, the pentose phosphate pathway, glycolysis and gluconeogenesis. The localization of pathways can be different in other organisms, for instance fatty acid synthesis occurs in chloroplasts in plants and in apicoplasts in apicomplexa.
6784
1300302940
https://en.wikipedia.org/wiki?curid=6784
Citizenship
Citizenship is membership in and allegiance to a sovereign state. Though citizenship is often conflated with nationality in today's English-speaking world, international law does not usually use the term "citizenship" to refer to nationality; these two notions are conceptually different dimensions of collective membership. Generally, citizenship has no expiration and allows persons to work, reside and vote in the polity, as well as identify with the polity, possibly acquiring a passport. However, through discriminatory laws, such as disfranchisement and outright apartheid, some citizens have been made second-class citizens. Historically, populations of states were mostly subjects, while citizenship was a particular status which originated in the rights of urban populations, like the rights of the male public of cities and republics, particularly ancient city-states, giving rise to a civitas and the social class of the burgher or bourgeoisie. Since then states have expanded the status of citizenship to most of their national people, with the extent of citizen rights differing between states. Determining factors. A person can be recognized as a citizen on a number of bases. Responsibilities of a citizen. Every citizen has obligations that are enshrined by law and some responsibilities that benefit the community. Obeying the laws of a country and paying taxes are some of the obligations required of citizens by law. Voting and community service form part of the responsibilities of a citizen that benefit the community. Before the revolutionary "liberté, égalité, fraternité" was popularized in 1789, the Habsburg monarchy established imperial citizenship, potentially for all of its subjects, provided that the taxpayer was independent of the local nobility. Legal equality was extended to subjects who were willing to comply in much of the Habsburg empire with the 1811 Civil Code. "Polis". Many thinkers, such as Giorgio Agamben in his work extending the biopolitical framework of Foucault's History of Sexuality in the book "Homo Sacer", point to the concept of citizenship beginning in the early city-states of ancient Greece, although others see it as primarily a modern phenomenon dating back only a few hundred years and, for humanity, that the concept of citizenship arose with the first laws. "Polis" meant both the political assembly of the city-state and the entire society. The concept of citizenship has generally been identified as a western phenomenon. There is a general view that citizenship in ancient times was a simpler relation than modern forms of citizenship, although this view has come under scrutiny. The relation of citizenship has not been a fixed or static relation but has constantly changed within each society; according to one view, citizenship might "really have worked" only at select periods during certain times, such as when the Athenian politician Solon made reforms in the early Athenian state. Citizenship was also contingent on a variety of biopolitical assemblages, such as the bioethics of emerging Theo-Philosophical traditions. It was necessary to fit Aristotle's definition of the besouled (the animate) to obtain citizenship: neither the sacred olive tree nor spring would have any rights. An essential part of the framework of Greco-Roman ethics is the figure of "Homo Sacer" or the bare life. Historian Geoffrey Hosking in his 2005 "Modern Scholar" lecture course suggested that citizenship in ancient Greece arose from an appreciation for the importance of freedom. 
Hosking explained: Slavery permitted slave-owners to have substantial free time and enabled participation in public life. Polis citizenship was marked by exclusivity. Inequality of status was widespread; citizens (πολίτης "politēs" < πόλις 'city') had a higher status than non-citizens, such as women, slaves, and resident foreigners (metics). The first form of citizenship was based on the way people lived in the ancient Greek times, in small-scale organic communities of the polis. The obligations of citizenship were deeply connected to one's everyday life in the polis. These small-scale organic communities were generally seen as a new development in world history, in contrast to the established ancient civilizations of Egypt or Persia, or the hunter-gatherer bands elsewhere. From the viewpoint of the ancient Greeks, a person's public life could not be separated from their private life, and Greeks did not distinguish between the two worlds according to the modern western conception. The obligations of citizenship were deeply connected with everyday life. To be truly human, one had to be an active citizen of the community, which Aristotle famously expressed: "To take no part in the running of the community's affairs is to be either a beast or a god!" This form of citizenship was based on the obligations of citizens towards the community, rather than rights given to the citizens of the community. This was not a problem because they all had a strong affinity with the polis; their own destiny and the destiny of the community were strongly linked. Also, citizens of the polis saw obligations to the community as an opportunity to be virtuous; it was a source of honor and respect. In Athens, citizens were both rulers and ruled; important political and judicial offices were rotated, and all citizens had the right to speak and vote in the political assembly. Roman ideas. In the Roman Empire, citizenship expanded from small-scale communities to the entirety of the empire. Romans realized that granting citizenship to people from all over the empire legitimized Roman rule over conquered areas. Roman citizenship was no longer a status of political agency, as it had been reduced to a judicial safeguard and the expression of rule and law. Rome carried forth Greek ideas of citizenship such as the principles of equality under the law, civic participation in government, and notions that "no one citizen should have too much power for too long", but Rome offered relatively generous terms to its captives, including chances for lesser forms of citizenship. If Greek citizenship was an "emancipation from the world of things", the Roman sense increasingly reflected the fact that citizens could act upon material things as well as other citizens, in the sense of buying or selling property, possessions, titles, goods. One historian explained: Roman citizenship reflected a struggle between the upper-class patrician interests and the lower-order working groups known as the plebeian class. A citizen came to be understood as a person "free to act by law, free to ask and expect the law's protection, a citizen of such and such a legal community, of such and such a legal standing in that community". Citizenship meant having rights to have possessions, immunities, expectations, which were "available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason". The law itself was a kind of bond uniting people. 
Roman citizenship was more impersonal, universal, multiform, having different degrees and applications. Middle Ages. During the European Middle Ages, citizenship was usually associated with cities and towns (see medieval commune), and applied mainly to middle-class people. Titles such as burgher, grand burgher (German "Großbürger") and the bourgeoisie denoted political affiliation and identity in relation to a particular locality, as well as membership in a mercantile or trading class; thus, individuals of respectable means and socioeconomic status were interchangeable with citizens. During this era, members of the nobility had a range of privileges above commoners (see aristocracy), though political upheavals and reforms, beginning most prominently with the French Revolution, abolished privileges and created an egalitarian concept of citizenship. Renaissance. During the Renaissance, people transitioned from being subjects of a king or queen to being citizens of a city and later to a nation. Each city had its own law, courts, and independent administration, and being a citizen often meant being subject to the city's law in addition to having power in some instances to help choose officials. City dwellers who had fought alongside nobles in battles to defend their cities were no longer content with having a subordinate social status but demanded a greater role in the form of citizenship. Membership in guilds was an indirect form of citizenship in that it helped its members succeed financially. The rise of citizenship was linked to the rise of republicanism, according to one account, since independent citizens meant that kings had less power. Citizenship became an idealized, almost abstract, concept, and did not signify a submissive relation with a lord or count, but rather indicated the bond between a person and the state in the rather abstract sense of having rights and duties. Modern times. The modern idea of citizenship still respects the idea of political participation, but it is usually done through elaborate systems of political representation at a distance such as representative democracy. Modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are usually aware of their obligations to authorities and are aware that these bonds often limit what they can do. United States. From 1790 until the mid-twentieth century, United States law used racial criteria to establish citizenship rights and regulate who was eligible to become a naturalized citizen. The Naturalization Act of 1790, the first law in U.S. history to establish rules for citizenship and naturalization, barred citizenship to all people who were not of European descent, stating that "any alien being a free white person, who shall have resided within the limits and under the jurisdiction of the United States for the term of two years, may be admitted to become a citizen thereof." Under early U.S. laws, African Americans were not eligible for citizenship. In 1857, these laws were upheld in the US Supreme Court case "Dred Scott v. Sandford", which ruled that "a free negro of the African race, whose ancestors were brought to this country and sold as slaves, is not a 'citizen' within the meaning of the Constitution of the United States," and that "the special rights and immunities guaranteed to citizens do not apply to them." 
It was not until the abolition of slavery following the American Civil War that African Americans were granted citizenship rights. The 14th Amendment to the U.S. Constitution, ratified on July 9, 1868, stated that "all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside." Two years later, the Naturalization Act of 1870 would extend the right to become a naturalized citizen to include "aliens of African nativity and to persons of African descent". Despite the gains made by African Americans after the Civil War, Native Americans, Asians, and others not considered "free white persons" were still denied the ability to become citizens. The 1882 Chinese Exclusion Act explicitly denied naturalization rights to all people of Chinese origin, while subsequent acts passed by the US Congress, such as laws in 1906, 1917, and 1924, would include clauses that denied immigration and naturalization rights to people based on broadly defined racial categories. Supreme Court cases such as "Ozawa v. United States" (1922) and "U.S. v. Bhagat Singh Thind" (1923) would later clarify the meaning of the phrase "free white persons," ruling that ethnically Japanese, Indian, and other non-European people were not "white persons", and were therefore ineligible for naturalization under U.S. law. Native Americans were not granted full US citizenship until the passage of the Indian Citizenship Act in 1924. However, even well into the 1960s, some state laws prevented Native Americans from exercising their full rights as citizens, such as the right to vote. In 1962, New Mexico became the last state to enfranchise Native Americans. It was not until the passage of the Immigration and Nationality Act of 1952 that the racial and gender restrictions for naturalization were explicitly abolished. However, the act still contained restrictions regarding who was eligible for US citizenship and retained a national quota system which limited the number of visas given to immigrants based on their national origin, to be fixed "at a rate of one-sixth of one percent of each nationality's population in the United States in 1920". It was not until the passage of the Immigration and Nationality Act of 1965 that these immigration quota systems were drastically altered in favor of a less discriminatory system. Union of the Soviet Socialist Republics. The 1918 constitution of revolutionary Russia granted citizenship to any foreigners who were living within the Russian Soviet Federative Socialist Republic, so long as they were "engaged in work and [belonged] to the working class." It recognized "the equal rights of all citizens, irrespective of their racial or national connections" and declared oppression of any minority group or race "to be contrary to the fundamental laws of the Republic." The 1918 constitution also established the right to vote and be elected to soviets for both men and women "irrespective of religion, nationality, domicile, etc. [...] who shall have completed their eighteenth year by the day of the election." The later constitutions of the USSR would grant universal Soviet citizenship to the citizens of all member republics in accord with the principles of non-discrimination laid out in the original 1918 constitution of Russia. Nazi Germany. 
Nazism, the German variant of twentieth-century fascism, classified inhabitants of the country into three main hierarchical categories, each of which would have different rights in relation to the state: citizens, subjects, and aliens. The first category, citizens, were to possess full civic rights and responsibilities. Citizenship was conferred only on males of German (or so-called "Aryan") heritage who had completed military service, and could be revoked at any time by the state. The Reich Citizenship Law of 1935 established racial criteria for citizenship in the German Reich, and because of this law Jews and others who could not "prove German racial heritage" were stripped of their citizenship. The second category, subjects, referred to all others who were born within the nation's boundaries who did not fit the racial criteria for citizenship. Subjects would have no voting rights, could not hold any position within the state, and possessed none of the other rights and civic responsibilities conferred on citizens. All women were to be conferred "subject" status upon birth, and could only obtain "citizen" status if they worked independently or if they married a German citizen (see women in Nazi Germany). The final category, aliens, referred to those who were citizens of another state, who also had no rights. In 2021, the German government passed a law that entitled victims of Nazi persecution and their descendants to become naturalised German citizens. Israel. The primary principles of Israeli citizenship are "jus sanguinis" (citizenship by descent) for Jews and "jus soli" (citizenship by place of birth) for others. India. The Indian Citizenship Act, 1955, the first law in Indian history to establish rules for citizenship, provides for citizenship by "jus soli" (place of birth), by "jus sanguinis" (descent), by registration, by naturalization and by incorporation of territory. Different senses. Many theorists suggest that there are two opposing conceptions of citizenship: an economic one, and a political one. For further information, see History of citizenship. Citizenship status, under social contract theory, carries with it both rights and duties. In this sense, citizenship was described as "a bundle of rights -- primarily, political participation in the life of the community, the right to vote, and the right to receive certain protection from the community, as well as obligations." Citizenship is seen by most scholars as culture-specific, in the sense that the meaning of the term varies considerably from culture to culture, and over time. In China, for example, there is a cultural politics of citizenship which could be called "peopleship", as one academic article has argued. How citizenship is understood depends on the person making the determination. The relation of citizenship has never been fixed or static, but constantly changes within each society. While citizenship has varied considerably throughout history, and within societies over time, there are some common elements, though they vary considerably as well. As a bond, citizenship extends beyond basic kinship ties to unite people of different genetic backgrounds. It usually signifies membership in a political body. It is often based on, or was a result of, some form of military service or expectation of future service. It usually involves some form of political participation, but this can vary from token acts to active service in government. 
It generally describes a person with legal rights within a given political order. It almost always has an element of exclusion, meaning that some people are not citizens and that this distinction can sometimes be very important, or not important, depending on a particular society. Citizenship as a concept is generally hard to isolate intellectually and compare with related political notions, since it relates to many other aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. When there are many different groups within a nation, citizenship may be the only real bond that unites everybody as equals without discrimination; it is a "broad bond" linking "a person with the state" and gives people a universal identity as a legal member of a specific nation. Modern citizenship has often been looked at as two competing underlying ideas. Responsibilities of citizens. Responsibility is an action that individuals of a state or country must take note of in the interest of a common good. These responsibilities can be categorised into personal and civic responsibilities. Scholars suggest that the concept of citizenship contains many unresolved issues, sometimes called tensions, existing within the relation, that continue to reflect uncertainty about what citizenship is supposed to mean. One unresolved issue regarding citizenship is the question of the proper balance between duties and rights. Another is the question of the proper balance between political citizenship and social citizenship. Some thinkers see benefits in people being absent from public affairs, since too much participation, such as revolution, can be destructive, yet too little participation, such as total apathy, can be problematic as well. Citizenship can be seen as a special elite status, and it can also be seen as a democratizing force and something that everybody has; the concept can include both senses. According to sociologist Arthur Stinchcombe, citizenship is based on the extent to which a person can control their own destiny within the group, in the sense of being able to influence the government of the group. One last distinction within citizenship is the so-called consent-versus-descent distinction; this issue addresses whether citizenship is a fundamental matter determined by a person choosing to belong to a particular nation (by their consent) or a matter of where a person was born (that is, by their descent). International. Some intergovernmental organizations have extended the concept and terminology associated with citizenship to the international level, where it is applied to the totality of the citizens of their constituent countries combined. Citizenship at this level is a secondary concept, with rights deriving from national citizenship. European Union. The Maastricht Treaty introduced the concept of citizenship of the European Union. Article 17 (1) of the Treaty on European Union stated that: Citizenship of the Union is hereby established. Every person holding the nationality of a Member State shall be a citizen of the Union. Citizenship of the Union shall be additional to and not replace national citizenship. An agreement known as the amended EC Treaty established certain minimal rights for European Union citizens. Article 12 of the amended EC Treaty guaranteed a general right of non-discrimination within the scope of the Treaty. 
Article 18 provided a limited right to free movement and residence in Member States other than that of which the European Union citizen is a national. Articles 18-21 and 225 provide certain political rights. Union citizens also have extensive rights to move in order to exercise economic activity in any of the Member States, which predate the introduction of Union citizenship. Mercosur. Citizenship of the Mercosur is granted to eligible citizens of the Southern Common Market member states. It was approved in 2010 through the Citizenship Statute and should be fully implemented by the member countries in 2021, when the program will be transformed into an international treaty incorporated into the national legal system of the countries, under the concept of "Mercosur Citizen". Commonwealth. The concept of "Commonwealth Citizenship" has been in place ever since the establishment of the Commonwealth of Nations. As with the EU, one holds Commonwealth citizenship only by being a citizen of a Commonwealth member state. This form of citizenship offers certain privileges within some Commonwealth countries. Although Ireland was excluded from the Commonwealth in 1949 because it declared itself a republic, Ireland is generally treated as if it were still a member. Legislation often specifically provides for equal treatment between Commonwealth countries and Ireland and refers to "Commonwealth countries and Ireland". Ireland's citizens are not classified as foreign nationals in the United Kingdom. Canada departed from the principle of nationality being defined in terms of allegiance in 1921. In 1935 the Irish Free State was the first to introduce its own citizenship. However, Irish citizens were still treated as subjects of the Crown, and they are still not regarded as foreign, even though Ireland is not a member of the Commonwealth. The "Canadian Citizenship Act" of 1946 provided for a distinct Canadian Citizenship, automatically conferred upon most individuals born in Canada, with some exceptions, and defined the conditions under which one could become a naturalized citizen. The concept of Commonwealth citizenship was introduced in 1948 in the British Nationality Act 1948. Other dominions adopted this principle, such as New Zealand by way of the British Nationality and New Zealand Citizenship Act 1948. Subnational. Citizenship most usually relates to membership of the nation-state, but the term can also apply at the subnational level. Subnational entities may impose requirements, of residency or otherwise, which permit citizens to participate in the political life of that entity or to enjoy benefits provided by the government of that entity. But in such cases, those eligible are also sometimes seen as "citizens" of the relevant state, province, or region. An example of this is how the fundamental basis of Swiss citizenship is citizenship of an individual commune, from which follows citizenship of a canton and of the Confederation. Another example is Åland, where the residents enjoy special provincial citizenship within Finland, "hembygdsrätt". The United States has a federal system in which a person is a citizen of their specific state of residence, such as New York or California, as well as a citizen of the United States. 
State constitutions may grant certain rights above and beyond what is granted under the United States Constitution and may impose their own obligations, including the sovereign right of taxation and military service; each state maintains at least one military force subject to national militia transfer service, the state's national guard, and some states maintain a second military force not subject to nationalization. Education. "Active citizenship" is the philosophy that citizens should work towards the betterment of their community through economic participation, public and volunteer work, and other such efforts to improve life for all citizens. In this vein, citizenship education is taught in schools, as an academic subject in some countries. By the time children reach secondary education, there is an emphasis on including such unconventional subjects in the academic curriculum. The general model of citizenship taught to many secondary school pupils is deliberately simplified, even rather facile. The idea behind this model within education is to instill in young pupils that their actions (i.e. their vote) affect collective citizenship and thus in turn them. Republic of Ireland. It is taught in the Republic of Ireland as an exam subject for the Junior Certificate. It is known as Civic, Social and Political Education (CSPE). A new Leaving Certificate exam subject with the working title 'Politics & Society' is being developed by the National Council for Curriculum and Assessment (NCCA) and is expected to be introduced to the curriculum sometime after 2012. United Kingdom. Citizenship is offered as a General Certificate of Secondary Education (GCSE) course in many schools in the United Kingdom. As well as teaching knowledge about democracy, parliament, government, the justice system, human rights and the UK's relations with the wider world, students participate in active citizenship, often involving a social action or social enterprise in their local community. Criticism. The concept of citizenship is criticized by open borders advocates, who argue that it functions as a caste, feudal, or apartheid system in which people are assigned dramatically different opportunities based on the accident of birth. It is also criticized by some libertarians, especially anarcho-capitalists. In 1987, moral philosopher Joseph Carens argued that "citizenship in Western liberal democracies is the modern equivalent of feudal privilege—an inherited status that greatly enhances one's life chances. Like feudal birthright privileges, restrictive citizenship is hard to justify when one thinks about it closely".
6787
7903804
https://en.wikipedia.org/wiki?curid=6787
Chiapas
Chiapas, officially the Free and Sovereign State of Chiapas, is one of the states that make up the 32 federal entities of Mexico. It comprises 124 municipalities and its capital and largest city is Tuxtla Gutiérrez. Other important population centers in Chiapas include Ocosingo, Tapachula, San Cristóbal de las Casas, Comitán, and Arriaga. Chiapas is the southernmost state in Mexico, and it borders the states of Oaxaca to the west, Veracruz to the northwest, and Tabasco to the north, as well as the Petén, Quiché, Huehuetenango, and San Marcos departments of Guatemala to the east and southeast. Chiapas has a significant coastline on the Pacific Ocean to the southwest. In general, Chiapas has a humid, tropical climate. In the northern area bordering Tabasco, near Teapa, rainfall can average several thousand millimetres per year. In the past, natural vegetation in this region was lowland, tall perennial rainforest, but this vegetation has been almost completely cleared to allow agriculture and ranching. Rainfall decreases moving towards the Pacific Ocean, but it is still abundant enough to allow the farming of bananas and many other tropical crops near Tapachula. On the several parallel "sierras" or mountain ranges running along the center of Chiapas, the climate can be quite moderate and foggy, allowing the development of cloud forests like those of the Reserva de la Biosfera El Triunfo, home to a handful of horned guans, resplendent quetzals, and azure-rumped tanagers. Chiapas is home to the ancient Mayan ruins of Palenque, Yaxchilán, Bonampak, Lacanha, Chinkultic, El Lagartero and Toniná. It is also home to one of the largest indigenous populations in the country, with twelve federally recognized ethnicities. Etymology. The official name of the state is Chiapas, which is believed to have come from the ancient city of Chiapan, which in Náhuatl means "the place where the chia sage grows." After the Spanish arrived (1522), they established two cities called Chiapas de los Indios and Chiapas de los Españoles (1528), with the name of Provincia de Chiapas for the area around the cities. The first coat of arms of the region dates from 1535 as that of the Ciudad Real (San Cristóbal de las Casas). Chiapas painter Javier Vargas Ballinas designed the modern coat of arms. History. Pre-Columbian Era. Hunter-gatherers began to occupy the central valley of the state around 7000 BCE, but little is known about them. In the pre-Classic period, from 1800 BCE to 300 CE, agricultural villages appeared all over the state, although hunter-gatherer groups would persist long after the era. Recent excavations in the Soconusco region of the state indicate that the oldest civilization to appear in what is now modern Chiapas is that of the Mokaya, who were cultivating corn and living in houses as early as 1500 BCE, making them one of the oldest in Mesoamerica. There is speculation that these were the forefathers of the Olmec, migrating across the Grijalva Valley and onto the coastal plain of the Gulf of Mexico to the north, which was Olmec territory. The descendants of the Mokaya are the Mixe-Zoque. During the pre-Classic era, it is known that most of Chiapas was not Olmec, but had close relations with them, especially the Olmecs of the Isthmus of Tehuantepec. Mayan civilization began in the pre-Classic period as well, but did not come into prominence until the Classic period (300–900 CE). 
The development of this culture progressed from agricultural villages during the pre-Classic period to city building during the Classic period, as social stratification became more complex. In Chiapas, Mayan sites are mostly concentrated along the state's borders with Tabasco and Guatemala, near Mayan sites in those entities. Most of this area belongs to the Lacandon Jungle. Mayan civilization in the Lacandon area is marked by rising exploitation of rain forest resources, rigid social stratification, fervent local identity, and the waging of war against neighboring peoples. At its height, it had large cities, a writing system, and development of scientific knowledge, such as mathematics and astronomy. It is not known what ended the Mayan civilization, but theories include overpopulation, natural disasters, disease, and the loss of natural resources through overexploitation or climate change. Nearly all Mayan cities collapsed around the same time, 900 CE. From then until 1500 CE, social organization of the region fragmented into much smaller units and social structure became much less complex. There was some influence from the rising powers of central Mexico, but two main indigenous groups emerged during this time, the Zoques and the various Mayan descendants. The Chiapans, for whom the state is named, migrated into the center of the state during this time and settled around Chiapa de Corzo, the old Mixe–Zoque stronghold. There is evidence that the Aztecs appeared in the center of the state around Chiapa de Corzo in the 15th century, but were unable to displace the native Chiapa tribe. However, they had enough influence so that the name of this area and of the state would come from Nahuatl. Colonial period. When the Spanish arrived in the 16th century, they found the indigenous peoples divided into Mayan and non-Mayan, with the latter dominated by the Zoques and Chiapanecas. The first contact between Spaniards and the people of Chiapas came in 1522, when Hernán Cortés sent tax collectors to the area after the Aztec Empire was subdued. The first military incursion was headed by Luis Marín, who arrived in 1523. After three years, Marín was able to subjugate a number of the local peoples, but met with fierce resistance from the Tzotzils in the highlands. The Spanish colonial government then sent a new expedition under Diego de Mazariegos. Mazariegos had more success than his predecessor, but many natives preferred to commit suicide rather than submit to the Spanish. One famous example of this is the Battle of Tepetchia, where many jumped to their deaths in the Sumidero Canyon. Indigenous resistance was weakened by continual warfare with the Spaniards and disease. By 1530 almost all of the indigenous peoples of the area had been subdued, with the exception of the Lacandons in the deep jungles, who actively resisted until 1695. However, the main two groups, the Tzotzils and Tzeltals of the central highlands, were subdued enough to allow the establishment of the first Spanish city, today called San Cristóbal de las Casas, in 1528. It was one of two settlements, initially called Villa Real de Chiapa de los Españoles, the other being Chiapa de los Indios. The encomienda system that had perpetrated much of the labor-related abuse of the indigenous peoples declined by the end of the 16th century, and was replaced by haciendas. However, the use and misuse of Indian labor remained a large part of Chiapas politics into modern times. 
Maltreatment and tribute payments created an undercurrent of resentment in the indigenous population that passed on from generation to generation. One uprising against high tribute payments occurred in the Tzeltal communities in the Los Altos region in 1712. Soon, the Tzotzils and Ch'ols joined the Tzeltals in rebellion, but within a year the government was able to extinguish the rebellion. As of 1778, Thomas Kitchin described Chiapas as "the metropolis of the original Mexicans," with a population of approximately 20,000, and consisting mainly of indigenous peoples. The Spanish introduced new crops such as sugar cane, wheat, barley and indigo as main economic staples alongside native ones such as corn, cotton, cacao and beans. Livestock such as cattle, horses and sheep were introduced as well. Regions would specialize in certain crops and animals depending on local conditions, and for many of these regions, communication and travel were difficult. Most Europeans and their descendants tended to concentrate in cities such as Ciudad Real, Comitán, Chiapa and Tuxtla. Intermixing of the races was prohibited by colonial law, but by the end of the 17th century there was a significant mestizo population. Added to this was a population of African slaves brought in by the Spanish in the middle of the 16th century due to the loss of the native workforce. Initially, "Chiapas" referred to the first two cities established by the Spanish in what is now the center of the state and the area surrounding them. Two other regions were also established, the Soconusco and Tuxtla, all under the regional colonial government of Guatemala. The Chiapas, Soconusco and Tuxtla regions were united for the first time as an "intendencia" during the Bourbon Reforms in 1790, as an administrative region under the name of Chiapas. However, within this intendencia, the division between the Chiapas and Soconusco regions would remain strong and have consequences at the end of the colonial period. Era of Independence. From the colonial period on, Chiapas was relatively isolated from the colonial authorities in Mexico City and the regional authorities in Guatemala. One reason for this was the rugged terrain. Another was that much of Chiapas was not attractive to the Spanish. It lacked mineral wealth, large areas of arable land, and easy access to markets. This isolation spared it from battles related to Independence. Following the end of Spanish rule in New Spain, it was unclear what new political arrangements would emerge. The isolation of Chiapas from centers of power, along with the strong internal divisions in the intendencia, caused a political crisis after the royal government collapsed in Mexico City in 1821, ending the Mexican War of Independence. During this war, a group of influential Chiapas merchants and ranchers sought the establishment of the Free State of Chiapas. This group became known as "La Familia Chiapaneca". However, this alliance did not last, with the lowlands preferring inclusion among the new republics of Central America and the highlands preferring annexation to Mexico. In 1821, a number of cities in Chiapas, starting in Comitán, declared the state's separation from the Spanish empire. In 1823, Guatemala became part of the United Provinces of Central America, which united to form a federal republic that would last from 1823 to 1839. With the exception of the pro-Mexican Ciudad Real (San Cristóbal) and some others, many Chiapanecan towns and villages favored a Chiapas independent of Mexico, and some favored unification with Guatemala. 
Elites in highland cities pushed for incorporation into Mexico. In 1822, then-Emperor Agustín de Iturbide decreed that Chiapas was part of Mexico. In 1823, the Junta General de Gobierno was held and Chiapas declared independence again. In July 1824, the Soconusco District of southwestern Chiapas split off from Chiapas, announcing that it would join the Central American Federation. In September of the same year, a referendum was held on whether the intendencia would join Central America or Mexico, with many of the elite endorsing union with Mexico. This referendum ended in favor of incorporation with Mexico (allegedly through manipulation by the elite in the highlands), but the Soconusco region maintained a neutral status until 1842, when Oaxacans under General Antonio López de Santa Anna occupied the area and declared it reincorporated into Mexico. Elites of the area would not accept this until 1844. Guatemala would not recognize Mexico's annexation of the Soconusco region until 1895, even though the border between Chiapas and Guatemala had been agreed upon in 1882. The State of Chiapas was officially declared in 1824, with its first constitution in 1826. Ciudad Real was renamed San Cristóbal de las Casas in 1828. In the decades after the official end of the war, the provinces of Chiapas and Soconusco unified, with power concentrated in San Cristóbal de las Casas. The state's society evolved into three distinct spheres: indigenous peoples, mestizos from the farms and haciendas, and the Spanish colonial cities. Most of the political struggles were between the last two groups, especially over who would control the indigenous labor force. Era of the Liberal Reform. With the ouster of the conservative Antonio López de Santa Anna, Mexican liberals came to power. The Reform War (1858–1861), fought between the Liberals, who favored federalism, sought economic development and wanted to reduce the power of the Roman Catholic Church and the Mexican army, and the Conservatives, who favored centralized autocratic government and the retention of elite privileges, did not lead to any military battles in the state. Despite this, it strongly affected Chiapas politics. In Chiapas, the Liberal-Conservative division had its own twist. Much of the division between the highland and lowland ruling families concerned whom the Indians should work for, and for how long, as the main shortage was of labor. These families split into Liberals in the lowlands, who wanted further reform, and Conservatives in the highlands, who still wanted to keep some of the traditional colonial and church privileges. For most of the early and mid 19th century, Conservatives held most of the power and were concentrated in the larger cities of San Cristóbal de las Casas, Chiapa (de Corzo), Tuxtla and Comitán. As Liberals gained the upper hand nationally in the mid-19th century, the Liberal politician Ángel Albino Corzo gained control of the state. Corzo became the primary exponent of Liberal ideas in the southeast of Mexico and defended the Palenque and Pichucalco areas from annexation by Tabasco. However, Corzo's rule would end in 1875, when he opposed the regime of Porfirio Díaz. Liberal land reforms would have negative effects on the state's indigenous population, unlike in other areas of the country. Liberal governments expropriated lands that were previously held by the Spanish Crown and the Catholic Church in order to sell them into private hands. This was not only motivated by ideology, but also due to the need to raise money. 
However, many of these lands had been held in a kind of "trust" by the local indigenous populations, who worked them. Liberal reforms took away this arrangement, and many of these lands fell into the hands of large landholders, who then made the local Indian population work for three to five days a week just for the right to continue to cultivate the lands. This requirement caused many to leave and look for employment elsewhere. Most became "free" workers on other farms, but they were often paid only with food and basic necessities from the farm shop. If this was not enough, these workers became indebted to these same shops and then unable to leave. Porfiriato, 1876–1911. The Porfirio Díaz era at the end of the 19th century and beginning of the 20th was initially thwarted by regional bosses called caciques, bolstered by a wave of Spanish and mestizo farmers who migrated to the state and added to the elite group of wealthy landowning families. There was some technological progress, such as a highway from San Cristóbal to the Oaxaca border and the first telephone line in the 1880s, but Porfirian-era economic reforms would not begin until 1891 with Governor Emilio Rabasa. This governor took on the local and regional caciques and centralized power into the state capital, which he moved from San Cristóbal de las Casas to Tuxtla in 1892. He modernized public administration and transportation and promoted education. Rabasa also introduced the telegraph, limited public schooling, sanitation and road construction, including a route from San Cristóbal to Tuxtla and then Oaxaca, which signaled the beginning of favoritism toward development in the central valley over the highlands. He also changed state policies to favor foreign investment and the consolidation of large landholdings for the production of cash crops such as henequen, rubber, guayule, cochineal and coffee. Agricultural production boomed, especially coffee, which induced the construction of port facilities in Tonalá. The economic expansion and investment in roads also increased access to tropical commodities such as hardwoods, rubber and chicle. These still required cheap and steady labor to be provided by the indigenous population. By the end of the 19th century, the four main indigenous groups, the Tzeltals, Tzotzils, Tojolabals and Ch'ols, were living in "reducciones" or reservations, isolated from one another. Conditions on the farms of the Porfirian era amounted to serfdom, as bad as if not worse than those of other indigenous and mestizo populations in the lead-up to the Mexican Revolution. While this coming event would affect the state, Chiapas did not follow the uprisings in other areas that would end the Porfirian era. Early 20th century to 1960. In the early 20th century and into the Mexican Revolution, the production of coffee was particularly important but labor-intensive. This would lead to a practice called "enganche" (hook), where recruiters would lure workers with advance pay and other incentives such as alcohol and then trap them with debts for travel and other items to be worked off. This practice would lead to a kind of indentured servitude and to uprisings in areas of the state, although they never led to large rebel armies as in other parts of Mexico. A small war broke out between Tuxtla Gutiérrez and San Cristóbal in 1911. San Cristóbal, allied with San Juan Chamula, tried to regain the state's capital, but the effort failed. 
After three years of peace, the "First Chief" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914, taking over the government, with the aim of imposing the "Ley de Obreros" (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later when they were certain the Carranza forces would take their lands. This was mostly by way of guerrilla actions headed by farm owners who called themselves the "Mapaches". This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico. The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936, to maintain their control over the state. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies, including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers, but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921. There was political stability from the 1940s to the early 1970s; however, regionalism regained strength, with people thinking of themselves as being from their local city or municipality rather than from the state. This regionalism impeded the economy, as local authorities restricted the entry of outside goods. For this reason, the construction of highways and communications was pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristobal Colon highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco, and a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal land owners called ejidatarios. Mid-20th century to 1990. In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. Since the 1930s, many indigenous people and mestizos have migrated from the highland areas into the Lacandon Jungle, with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest and grow crops and raise livestock, especially cattle. Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this, there were still serf-like conditions for many workers and insufficient educational infrastructure. The population continued to increase faster than the economy could absorb it. There were some attempts to resettle peasant farmers onto uncultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle-ranchers who refused to leave. 
The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas, poor farmland and severe poverty afflicted the Mayan Indians, which led to unsuccessful nonviolent protests and eventually to the armed struggle started by the Zapatista Army of National Liberation in January 1994. These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor in this movement would be the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide "Indian Congress" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities as well as Marists and the Maoist People's Union. This congress was the first of its kind, with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially to form unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that, starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects. The 1980s saw a large wave of refugees coming into the state from Central America as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas/Guatemala border had been relatively porous, with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico/U.S. border around the same time. This was in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had traditionally been poorly guarded, due to diplomatic considerations, lack of resources and pressure from landowners who needed cheap labor sources. The arrival of thousands of refugees from Central America stressed Mexico's relationship with Guatemala, at one point coming close to war, and also politically destabilized Chiapas. Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone. The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million from El Salvador, almost all peasant farmers and most under age twenty. In the 1980s, the politicization of the indigenous and rural populations of the state that began in the 1960s and 1970s continued. In 1980, several ejidos (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. 
By 1988, this organization joined with others to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organizations were from Protestant and Evangelical sects as well as "Word of God" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was an indigenous identity vis-à-vis the non-indigenous, for whom they used the old 19th-century "caste war" term "Ladino".
Economic liberalization and the EZLN.
The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, particularly as the reforms were believed to have had negative economic effects on poor farmers, especially small-scale indigenous coffee growers. Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous versus mestizo character. However, the movement was an economic one as well. Although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two thirds of the state's residents did not have sewage service, only a third had electricity and half did not have potable water. Over half of the schools offered education only to the third grade and most pupils dropped out by the end of first grade.
Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man known only as "Subcomandante Marcos." This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention when, on January 1, 1994 (the day the NAFTA treaty went into effect), EZLN forces occupied and took over the towns of San Cristóbal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies. Although it has been estimated as having no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and authorities. However, because of the diocese's activism since the 1960s, authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos, and it was a constant feature of news coverage, with many in official circles using it to discredit Ruiz. Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re-establish itself among Chiapan indigenous communities against Protestant evangelization. This led to a breach between the Church and the Zapatistas. The Zapatista story remained in headlines for a number of years.
One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, by a government-backed paramilitary group in the Zapatista-controlled village of Acteal in the Chenalhó municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government.
The Zapatista movement has had some successes. The agricultural sector of the economy now favors "ejidos" and other commonly owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas's traditional agricultural economy has diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. Its economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries. A significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy.
However, Chiapas remains one of the poorest states in Mexico. Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its significant indigenous population with isolationist tendencies keeps the state culturally distinct. Cultural stratification, neglect and lack of investment by the Mexican federal government have exacerbated this problem.
Dissolution of the Rebel Zapatista Autonomous Municipalities.
In early November 2023, a statement signed by Subcomandante Moisés announced that the EZLN would dissolve the Rebel Zapatista Autonomous Municipalities, citing the cartel violence generated by the Sinaloa Cartel and the Jalisco New Generation Cartel and violent clashes along the increasingly unstable border with Guatemala. "Caracoles" will remain open to locals but closed to outsiders, and the previous MAREZ system will be reorganized into a new autonomous system.
Geography.
Political geography.
Chiapas is located in Southeastern Mexico, bordering the states of Tabasco, Veracruz and Oaxaca, with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km2, making it the eighth largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo.
Geographical regions.
The state has a complex geography with seven distinct regions according to the Mullerried classification system. These are the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains.
The Pacific Coast Plains region is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat and stretches from the Bernal Mountain south to Tonalá. It has deep salty soils due to its proximity to the sea. It was mostly covered in deciduous rainforest, although most of this has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation.
The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast, as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas, including the Tacaná Volcano, which rises above sea level. Most of these mountains are volcanic in origin, although the nucleus is metamorphic rock. The range has a wide variety of climates but little arable land. It is mostly covered in middle altitude rainforest, high altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds from the Pacific, a process known as orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border.
The Central Depression is in the center of the state. It is an extensive semi-flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. The original vegetation was lowland deciduous forest, with some rainforest at middle altitudes and some oak forest at higher elevations.
The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast with altitudes ranging from above sea level. The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations, mainly limestone, shale, and sandstone. These mountains, along with the Sierra Madre de Chiapas, become the Cuchumatanes where they extend over the border into Guatemala. The topography is mountainous, with many narrow valleys and karst formations called uvalas or poljés, depending on their size. Most of the rock is limestone, allowing for formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock, with the tallest peaks being the Tzontehuitz and Huitepec volcanoes. There are no significant surface water systems, as they are almost all underground. The original vegetation was forest of oak and pine, but these have been heavily damaged. The highland climate in the modified Köppen classification system for Mexico is humid temperate C(m) and subhumid temperate C(w2)(w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers.
The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. Their altitude varies from . This area receives moisture from the Gulf of Mexico with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico.
The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Its rock is mostly limestone. These mountains also receive large amounts of rainfall with moisture from the Gulf of Mexico giving it a mostly hot and humid climate with rains year round. In the highest elevations around , temperatures are somewhat cooler and do experience a winter. The terrain is rugged with small valleys whose natural vegetation is high altitude rainforest. The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives it the alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season as it was built by sediments deposited by rivers and streams heading to the Gulf. Lacandon Jungle. The Lacandon Jungle is situated in north eastern Chiapas, centered on a series of canyonlike valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem covers an area of approximately extending from Chiapas into northern Guatemala and southern Yucatán Peninsula and into Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am w" i g) with most rain falling from summer to part of fall, with an average of between 2300 and 2600 mm per year. There is a short dry season from March to May. The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the area of Petén in Guatemala. Flowing within the Rainforest is the Usumacinta River, considered to be one of the largest rivers in Mexico and seventh largest in the world based on volume of water. During the 20th century, the Lacandon has had a dramatic increase in population and along with it, severe deforestation. The population of municipalities in this area, Altamirano, Las Margaritas, Ocosingo and Palenque have risen from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil, Tojolabal indigenous peoples along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers, who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands has pitted migrants against each other, the native Lacandon people, and the various ecological reserves for land. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged and farmed. It once stretched over a large part of eastern Chiapas but all that remains is along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year. The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. 
However, this nationalization and conversion into a reserve has made it one of the most contested lands in Chiapas, with the already existing ejidos and other settlements within the park along with new arrivals squatting on the land. Soconusco. The Soconusco region encompasses a coastal plain and a mountain range with elevations of up to above sea levels paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano at above sea level. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala goes right over the summit of this volcano. The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas's major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds in the Aztec empire, which they used as currency, and for the highly prized quetzal feathers used by the nobility. It would become the first area to produce coffee, introduced by an Italian entrepreneur on the La Chacara farm. Coffee is cultivated on the slopes of these mountains mostly between asl. Mexico produces about 4 million sacks of green coffee each year, fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small with plots of land under . From November to January, the annual crop is harvested and processed employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well. Environment and protected areas. Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. For this reason, there are hot, semi-hot, temperate and even cold climates. Some areas have abundant rainfall year-round and others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow over the state, concentrating moisture in certain areas of the state. They also are responsible for some cloud-covered rainforest areas in the Sierra Madre. Chiapas's rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, pine and oak forests in the highest altitudes and plains area with some grassland. Chiapas is ranked second in forest resources in Mexico with valued woods such as pine, cypress, "Liquidambar", oak, cedar, mahogany and more. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere with an extension of . It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 of vertebrate species. Chiapas has one of the greatest diversities in wildlife in the Americas. There are more than 100 species of amphibians, 700 species of birds, fifty of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard Abronia lythrochila, weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles, and crustaceans, with many species in danger of extinction or endangered as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. 
The diversity of species is not limited to the hot lowlands. The higher altitudes also have mesophile forests, oak/pine forests in the Los Altos, Northern Mountains and Sierra Madre and the extensive estuaries and mangrove wetlands along the coast. Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the first are short rivers and streams; most longer ones flow to the Gulf. Most Pacific side rivers do not drain directly into this ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, with both part of the same system. The Grijalva has four dams built on it the Belisario Dominguez (La Angostura); Manuel Moreno Torres (Chicoasén); Nezahualcóyotl (Malpaso); and Angel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has of surface waters, of coastline, control of of ocean, of estuaries and ten lake systems. Laguna Miramar is a lake in the Montes Azules reserve and the largest in the Lacandon Jungle at 40 km in diameter. The color of its waters varies from indigo to emerald green and in ancient times, there were settlements on its islands and its caves on the shoreline. The Catazajá Lake is 28 km north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas and it is surrounded by rainforest. Fishing on this lake is an ancient tradition and the lake has an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak. The state has thirty-six protected areas at the state and federal levels along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980 with an extension of . It extends over two of the regions of the state, the Central Depression and the Central Highlands over the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep and vertical sides that rise to up to 1000 meters from the river below with mostly tropical rainforest but some areas with xerophile vegetation such as cactus can be found. The river below, which has cut the canyon over the course of twelve million years, is called the Grijalva. The canyon is emblematic for the state as it is featured in the state seal. The Sumidero Canyon was once the site of a battle between the Spaniards and Chiapanecan Indians. Many Chiapanecans chose to throw themselves from the high edges of the canyon rather than be defeated by Spanish forces. Today, the canyon is a popular destination for ecotourism. Visitors can take boat trips down the river that runs through the canyon and see the area's many birds and abundant vegetation. The Montes Azules Biosphere Reserve was decreed in 1978. It is located in the northeast of the state in the Lacandon Jungle. It covers in the municipalities of Maravilla Tenejapa, Ocosingo and Las Margaritas. It conserves highland perennial rainforest. The jungle is in the Usumacinta River basin east of the Chiapas Highlands. It is recognized by the United Nations Environment Programme for its global biological and cultural significance. In 1992, the Lacantun Reserve, which includes the Classic Maya archaeological sites of Yaxchilan and Bonampak, was added to the biosphere reserve. 
Agua Azul Waterfall Protection Area is in the Northern Mountains in the municipality of Tumbalá. It covers an area of of rainforest and pine-oak forest, centered on the waterfalls it is named after. It is located in an area locally called the "Mountains of Water", as many rivers flow through there on their way to the Gulf of Mexico. The rugged terrain encourages waterfalls with large pools at the bottom, that the falling water has carved into the sedimentary rock and limestone. Agua Azul is one of the best known in the state. The waters of the Agua Azul River emerge from a cave that forms a natural bridge of thirty meters and five small waterfalls in succession, all with pools of water at the bottom. In addition to Agua Azul, the area has other attractions—such as the Shumuljá River, which contains rapids and waterfalls, the Misol Há Waterfall with a thirty-meter drop, the Bolón Ajau Waterfall with a fourteen-meter drop, the Gallito Copetón rapids, the Blacquiazules Waterfalls, and a section of calm water called the Agua Clara. The El Ocote Biosphere Reserve was decreed in 1982 located in the Northern Mountains at the boundary with the Sierra Madre del Sur in the municipalities of Ocozocoautla, Cintalapa and Tecpatán. It has a surface area of and preserves a rainforest area with karst formations. The Lagunas de Montebello National Park was decreed in 1959 and consists of near the Guatemalan border in the municipalities of La Independencia and La Trinitaria. It contains two of the most threatened ecosystems in Mexico the "cloud rainforest" and the Soconusco rainforest. The El Triunfo Biosphere Reserve, decreed in 1990, is located in the Sierra Madre de Chiapas in the municipalities of Acacoyagua, Ángel Albino Corzo, Montecristo de Guerrero, La Concordia, Mapastepec, Pijijiapan, Siltepec and Villa Corzo near the Pacific Ocean with . It conserves areas of tropical rainforest and many freshwater systems endemic to Central America. It is home to around 400 species of birds including several rare species such as the horned guan, the quetzal and the azure-rumped tanager. The Palenque National Forest is centered on the archaeological site of the same name and was decreed in 1981. It is located in the municipality of Palenque where the Northern Mountains meet the Gulf Coast Plain. It extends over of tropical rainforest. The Laguna Bélgica Conservation Zone is located in the north west of the state in the municipality of Ocozocoautla. It covers forty-two hectares centered on the Bélgica Lake. The El Zapotal Ecological Center was established in 1980. Nahá–Metzabok is an area in the Lacandon Forest whose name means "place of the black lord" in Nahuatl. It extends over and in 2010, it was included in the World Network of Biosphere Reserves. Two main communities in the area are called Nahá and Metzabok. They were established in the 1940s, but the oldest communities in the area belong to the Lacandon people. The area has large numbers of wildlife including endangered species such as eagles, quetzals and jaguars. Demographics. General statistics. As of 2010, the population is 4,796,580, the eighth most populous state in Mexico. The 20th century saw large population growth in Chiapas. From fewer than one million inhabitants in 1940, the state had about two million in 1980, and over 4 million in 2005. Overcrowded land in the highlands was relieved when the rainforest to the east was subject to land reform. Cattle ranchers, loggers, and subsistence farmers migrated to the rain forest area. 
The population of the Lacandon was only one thousand people in 1950, but by the mid-1990s this had increased to 200 thousand. As of 2010, 78% lives in urban communities with 22% in rural communities. While birthrates are still high in the state, they have come down in recent decades from 7.4 per woman in 1950. However, these rates still mean significant population growth in raw numbers. About half of the state's population is under age 20, with an average age of 19. In 2005, there were 924,967 households, 81% headed by men and the rest by women. Most households were nuclear families (70.7%) with 22.1% consisting of extended families. More migrate out of Chiapas than migrate in, with emigrants leaving for Tabasco, Oaxaca, Veracruz, State of Mexico and the Federal District (Mexico City) primarily. While Catholics remain the majority, their numbers have dropped as many have converted to Protestant denominations in recent decades. Islam is also a small but growing religion due to the Indigenous Muslims as well as Muslim immigrants from Africa continuously rising in numbers. The National Presbyterian Church in Mexico has a large following in Chiapas; some estimate that 40% of the population are followers of the Presbyterian church. There are a number of people in the state with African features. These are the descendants of slaves brought to the state in the 16th century. There are also those with predominantly European features who are the descendants of the original Spanish colonizers as well as later immigrants to Mexico. The latter mostly came at the end of the 19th and early 20th century under the Porfirio Díaz regime to start plantations. According to the 2020 Census, 1.02% of Chiapas's population identified as Black, Afro-Mexican, or of African descent. Indigenous population. Numbers and influence. Over the history of Chiapas, there have been three main indigenous groups: the Mixes-Zoques, the Mayas and the Chiapas. Today, there are an estimated fifty-six linguistic groups. As of the 2005 Census, there were 957,255 people who spoke an indigenous language out of a total population of about 3.5 million. Of this one million, one third do not speak Spanish. Out of Chiapas's 111 municipios, 99 have majority indigenous populations. 22 municipalities have indigenous populations over 90%, and 36 municipalities have native populations exceeding 50%. However, despite population growth in indigenous villages, the percentage of indigenous to non indigenous continues to fall with less than 35% indigenous. Indian populations are concentrated in a few areas, with the largest concentration of indigenous-language-speaking individuals is living in 5 of Chiapas's 9 economic regions: Los Altos, Selva, Norte, Fronteriza, and Sierra. The remaining three regions, Soconusco, Centro and Costa, have populations that are considered to be predominantly mestizo. The state has about 13.5% of all of Mexico's indigenous population, and it has been ranked among the ten "most indianized" states, with only Campeche, Oaxaca, Quintana Roo and Yucatán having been ranked above it between 1930 and the present. These indigenous peoples have been historically resistant to assimilation into the broader Mexican society, with it best seen in the retention rates of indigenous languages and the historic demands for autonomy over geographic areas as well as cultural domains. Much of the latter has been prominent since the Zapatista uprising in 1994. 
Most of Chiapas's indigenous groups are descended from the Maya and speak languages that are closely related to one another, belonging to the Western Maya language group. The state was part of a large region dominated by the Maya during the Classic period. The most numerous of these Mayan groups include the Tzeltal, Tzotzil, Ch'ol, Zoque, Tojolabal, Lacandon and Mam, which share traits such as syncretic religious practices and social structures based on kinship. The most common Western Maya languages are Tzeltal and Tzotzil, along with Chontal, Ch’ol, Tojolabal, Chuj, Kanjobal, Acatec, Jacaltec and Motozintlec.
Twelve of Mexico's officially recognized native peoples living in the state have conserved their language, customs, history, dress and traditions to a significant degree. The primary groups include the Tzeltal, Tzotzil, Ch'ol, Tojolabal, Zoque, Chuj, Kanjobal, Mam, Jakaltek, Mocho', Akatek, Kaqchikel and Lacandon. Most indigenous communities are found in the municipalities of the Centro, Altos, Norte and Selva regions, with many having indigenous populations of over fifty percent. These range from municipalities such as Bochil, Sitalá, Pantepec and Simojovel to those with over ninety percent indigenous population, such as San Juan Cancuc, Huixtán, Tenejapa, Tila, Oxchuc, Tapalapa, Zinacantán, Mitontic, Ocotepec, Chamula, and Chalchihuitán. The most numerous indigenous communities are the Tzeltal and Tzotzil peoples, who number about 400,000 each, together accounting for about half of the state's indigenous population. The next most numerous are the Ch’ol with about 200,000 people and the Tojolabal and Zoques, who number about 50,000 each. The top three municipalities in Chiapas with indigenous language speakers three years of age and older are Ocosingo (133,811), Chilón (96,567), and San Juan Chamula (69,475). These three municipalities accounted for 24.8% (299,853) of all indigenous language speakers three years or older in the state of Chiapas, out of a total of 1,209,057.
Although most indigenous language speakers are bilingual, especially in the younger generations, many of these languages have shown resilience. Four of Chiapas's indigenous languages, Tzeltal, Tzotzil, Tojolabal and Chol, are high-vitality languages, meaning that a high percentage of these ethnicities speak the language and that there is a high rate of monolingualism in it; each is used in over 80% of homes. Zoque is considered to be of medium vitality, with a rate of bilingualism of over 70% and home use somewhere between 65% and 80%. Maya is considered to be of low vitality, with almost all of its speakers bilingual in Spanish. The most spoken indigenous languages as of 2010 are Tzeltal with 461,236 speakers, Tzotzil with 417,462, Ch’ol with 191,947 and Zoque with 53,839. In total, there are 1,141,499 people who speak an indigenous language, or 27% of the total population. Of these, 14% do not speak Spanish. Studies done between 1930 and 2000 have indicated that Spanish is not dramatically displacing these languages. In raw numbers, speakers of these languages are increasing, especially among groups with a long history of resistance to Spanish/Mexican domination. Language maintenance has been strongest in areas where the Zapatista uprising took place, such as the municipalities of Altamirano, Chamula, Chanal, Larráinzar, Las Margaritas, Ocosingo, Palenque, Sabanilla, San Cristóbal de Las Casas and Simojovel.
The state's rich indigenous tradition, along with its associated political uprisings, especially that of 1994, has attracted great interest from other parts of Mexico and abroad. It has been especially appealing to a variety of academics, including many anthropologists, archeologists, historians, psychologists and sociologists. The concept of "mestizo", or mixed indigenous and European heritage, became important to Mexico's identity by the time of Independence, but Chiapas has kept its indigenous identity to the present day. Since the 1970s, this has been supported by the Mexican government as it has shifted toward cultural policies that favor a "multicultural" identity for the country. One major exception to the separatist, indigenous identity has been the case of the Chiapa people, from whom the state's name comes, who have mostly been assimilated and intermarried into the mestizo population.
Most indigenous communities have economies based primarily on traditional agriculture, such as the cultivation and processing of corn and beans, with coffee as a cash crop; in the last decade, many have begun producing sugarcane and jatropha for refinement into biodiesel and ethanol for automobile fuel. The raising of livestock, particularly chickens and turkeys and, to a lesser extent, cattle and farmed fish, is also a major economic activity. Many indigenous people, in particular the Maya, are employed in the production of traditional clothing, fabrics, textiles, wood items, artworks and traditional goods such as jade and amber works. Tourism has provided a number of these communities with markets for their handcrafts and works, some of which are very profitable.
San Cristóbal de las Casas and San Juan Chamula maintain a strong indigenous identity. On market day, many indigenous people from rural areas come into San Cristóbal to buy and sell mostly items for everyday use such as fruit, vegetables, animals, cloth, consumer goods and tools. San Juan Chamula is considered to be a center of indigenous culture, especially for its elaborate festivals of Carnival and the Day of Saint John. It was common for politicians, especially during the Institutional Revolutionary Party's dominance, to visit here during election campaigns, dress in indigenous clothing and carry a carved walking stick, a traditional sign of power.
Relations between the indigenous ethnic groups are complicated. While there has been inter-ethnic political activism, such as that promoted by the Diocese of Chiapas in the 1970s and the Zapatista movement in the 1990s, there has been inter-indigenous conflict as well. Much of this has been based on religion, pitting those with traditional Catholic/indigenous beliefs, who support the traditional power structure, against Protestants, Evangelicals and Word of God Catholics (directly allied with the Diocese), who tend to oppose it. This is a particularly significant problem among the Tzeltals and Tzotzils. Starting in the 1970s, traditional leaders in San Juan Chamula began expelling dissidents from their homes and land, amounting to about 20,000 indigenous people forced to leave over a thirty-year period. It continues to be a serious social problem, although authorities downplay it. Recently there has been political, social and ethnic conflict between the Tzotzil, who are more urbanized and have a significant number of Protestant practitioners, and the Tzeltal, who are predominantly Catholic and live in smaller farming communities.
Many Protestant Tzotzil have accused the Tzeltal of ethnic discrimination and intimidation due to their religious beliefs, and the Tzeltal have in return accused the Tzotzil of singling them out for discrimination.
Clothing, especially women's clothing, varies by indigenous group. For example, women in Ocosingo tend to wear a blouse with a round collar embroidered with flowers and a black skirt decorated with ribbons and tied with a cloth belt. The Lacandon people tend to wear a simple white tunic. They also make a ceremonial tunic from bark, decorated with astronomy symbols. In Tenejapa, women wear a huipil embroidered with Mayan fretwork along with a black wool rebozo. Men wear short pants, embroidered at the bottom.
Tzeltals.
The Tzeltals call themselves Winik atel, which means "working men." This is the largest ethnicity in the state, mostly living southeast of San Cristóbal, with the largest number in Amatenango. Today, there are about 500,000 Tzeltals in Chiapas. Tzeltal Mayan, part of the Mayan language family, is spoken today by about 375,000 people, making it the fourth-largest language group in Mexico. There are two main dialects: highland (or Oxchuc) and lowland (or Bachajonteco). This language, along with Tzotzil, is from the Tzeltalan subdivision of the Mayan language family. Lexico-statistical studies indicate that these two languages probably became differentiated from one another around 1200. Most children are bilingual in the language and Spanish, although many of their grandparents are monolingual Tzeltal speakers. Each Tzeltal community constitutes a distinct social and cultural unit with its own well-defined lands, wearing apparel, kinship system, politico-religious organization, economic resources, crafts, and other cultural features. Women are distinguished by a black skirt with a wool belt and an undyed cotton blouse embroidered with flowers. Their hair is tied with ribbons and covered with a cloth. Most men do not use traditional attire. Agriculture is the basic economic activity of the Tzeltal people. Traditional Mesoamerican crops such as maize, beans, squash, and chili peppers are the most important, but a variety of other crops, including wheat, manioc, sweet potatoes, cotton, chayote, some fruits, other vegetables, and coffee, are also grown.
Tzotzils.
Tzotzil speakers number just slightly fewer than the Tzeltals at 226,000, although the number of ethnic Tzotzils is probably higher. Tzotzils are found in the highlands, or Los Altos, and spread out towards the northeast near the border with Tabasco. However, Tzotzil communities can be found in almost every municipality of the state. They are concentrated in Chamula, Zinacantán, Chenalhó, and Simojovel. Their language is closely related to Tzeltal and distantly related to Yucatec Mayan and Lacandon. Men dress in short pants tied with a red cotton belt and a shirt that hangs down to their knees. They also wear leather huaraches and a hat decorated with ribbons. The women wear a red or blue skirt, a short huipil as a blouse, and use a chal or rebozo to carry babies and bundles. Tzotzil communities are governed by a katinab, who is selected for life by the leaders of each neighborhood. The Tzotzils are also known for their continued use of the temazcal for hygiene and medicinal purposes.
Ch’ols.
The Ch’ols of Chiapas migrated to the northwest of the state starting about 2,000 years ago, when they were concentrated in Guatemala and Honduras. Those Ch’ols who remained in the south are distinguished by the name Chortís.
Chiapas Ch’ols are closely related to the Chontal in Tabasco as well. Ch’ols are found in Tila, Tumbalá, Sabanilla, Palenque, and Salto de Agua, with an estimated population of about 115,000 people. The Ch’ol language belongs to the Maya family and is related to Tzeltal, Tzotzil, Lacandon, Tojolabal, and Yucatec Mayan. There are three varieties of Chol (spoken in Tila, Tumbalá, and Sabanilla), all mutually intelligible. Over half of speakers are monolingual in the Chol language. Women wear a long navy blue or black skirt with a white blouse heavily embroidered with bright colors and a sash with a red ribbon. The men only occasionally use traditional dress, for events such as the feast of the Virgin of Guadalupe. This dress usually includes pants, shirts and huipils made of undyed cotton, with leather huaraches, a carrying sack and a hat. The fundamental economic activity of the Ch’ols is agriculture. They primarily cultivate corn and beans, as well as sugar cane, rice, coffee, and some fruits. They have Catholic beliefs strongly influenced by native ones. Harvests are celebrated on the Feast of Saint Rose on 30 August.
Tojolabals.
The Tojolabals are estimated at 35,000 in the highlands. According to oral tradition, the Tojolabales came north from Guatemala. The largest community is Ingeniero González de León in the La Cañada region, an hour outside the municipal seat of Las Margaritas. Tojolabales are also found in Comitán, Trinitaria, Altamirano and La Independencia. This area is filled with rolling hills and has a temperate and moist climate, with fast-moving rivers and jungle vegetation. Tojolabal is related to Kanjobal, but also to Tzeltal and Tzotzil. However, most of the youngest members of this ethnicity speak Spanish. Women dress traditionally from childhood, with brightly colored skirts decorated with lace or ribbons and a blouse decorated with small ribbons, and they cover their heads with kerchiefs. They embroider many of their own clothes but do not sell them. Married women arrange their hair in two braids and single women wear it loose, decorated with ribbons. Men no longer wear traditional garb daily, as it is considered too expensive to make.
Zoques.
The Zoques are found in an area of about 3,000 square kilometers in the center and west of the state, scattered among hundreds of communities. They were one of the first native peoples of Chiapas, with archeological ruins tied to them dating back as far as 3500 BCE. Their language is not Mayan but rather related to Mixe, which is found in Oaxaca and Veracruz. By the time the Spanish arrived, they had been reduced in number and territory. Their ancient capital was Quechula, which was covered with water by the creation of the Malpaso Dam, along with the ruins of Guelegas, which was first buried by an eruption of the Chichonal volcano. There are still Zoque ruins at Janepaguay and in the Ocozocuautla and La Ciénega valleys.
Lacandons.
The Lacandons are one of the smallest indigenous groups of the state, with a population estimated at between 600 and 1,000. They are mostly located in the communities of Lacanjá Chansayab, Najá, and Mensabak in the Lacandon Jungle. They live near the ruins of Bonampak and Yaxchilan, and local lore states that the gods resided here when they lived on Earth. They inhabit about a million hectares of rainforest, but from the 16th century to the present, migrants have taken over the area, most of whom are indigenous people from other areas of Chiapas. This has dramatically altered their lifestyle and worldview.
Traditional Lacandon shelters are huts made with fronds and wood with an earthen floor, but these have mostly given way to modern structures.
Mochós.
The Mochós or Motozintlecos are concentrated in the municipality of Motozintla on the Guatemalan border. According to anthropologists, these people are an "urban" ethnicity, as they are mostly found in the neighborhoods of the municipal seat. Other communities can be found near the Tacaná volcano and in the municipalities of Tuzantán and Belisario Dominguez. The name "Mochó" comes from a response many gave to the Spanish, whom they could not understand; it means "I don't know." This community is in the process of disappearing as its numbers shrink.
Mams.
The Mams are a Mayan ethnicity numbering about 20,000, found in thirty municipalities, especially Tapachula, Motozintla, El Porvenir, Cacahoatán and Amatenango in the southeastern Sierra Madre of Chiapas. The Mame language is one of the most ancient Mayan languages; 5,450 Mame speakers were tallied in Chiapas in the 2000 census. These people first migrated to the border region between Chiapas and Guatemala at the end of the nineteenth century, establishing scattered settlements. In the 1960s, several hundred migrated to the Lacandon rain forest near the confluence of the Santo Domingo and Jataté Rivers. Those who live in Chiapas are referred to locally as the "Mexican Mam (or Mame)" to differentiate them from those in Guatemala. Most live around the Tacaná volcano, which the Mams call "our mother", as it is considered to be the source of the fertility of the area's fields. The masculine deity is the Tajumulco volcano, which is in Guatemala.
Guatemalan migrant groups.
In the last decades of the 20th century, Chiapas received a large number of indigenous refugees, especially from Guatemala, many of whom remain in the state. These have added ethnicities such as the Kekchi, Chuj, Ixil, Kanjobal, K'iche' and Cakchikel to the population. The Kanjobal mainly live along the border between Chiapas and Guatemala, with almost 5,800 speakers of the language tallied in the 2000 census. It is believed that a significant number of these Kanjobal speakers may have been born in Guatemala and immigrated to Chiapas, maintaining strong cultural ties to the neighboring nation.
Economy.
Economic indicators.
Chiapas accounts for 1.73% of Mexico's GDP. The primary sector, agriculture, produces 15.2% of the state's GDP. The secondary sector, mostly energy production, accounts for 21.8%, with the remainder coming from the tertiary sector of commerce, services and tourism. The share of the GDP coming from services is rising while that of agriculture is falling. The state is divided into nine economic regions. These regions were established in the 1980s in order to facilitate statewide economic planning. Many of these regions are based on state and federal highway systems. They include Centro, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa.
Despite being rich in resources, Chiapas, along with Oaxaca and Guerrero, lags behind the rest of the country in almost all socioeconomic indicators. There were 889,420 residential units; 71% had running water, 77.3% sewerage, and 93.6% electricity. Construction of these units varies from modern block and concrete to wood and laminate. Because of its high rate of economic marginalization, more people migrate from Chiapas than migrate to it. Most of its socioeconomic indicators are the lowest in the country, including income, education, health and housing.
It has a significantly higher percentage of illiteracy than the rest of the country, although the situation has improved since the 1970s, when over 45% were illiterate, and the 1980s, when about 32% were. The tropical climate presents health challenges, with most illnesses related to the gastrointestinal tract and parasites. As of 2005, the state had 1,138 medical facilities: 1,098 outpatient and 40 inpatient. Most are run by IMSS, ISSSTE and other government agencies.
The implementation of NAFTA had negative effects on the economy, particularly by lowering prices for agricultural products. It made the southern states of Mexico poorer in comparison to those in the north, with over 90% of the poorest municipalities in the south of the country. As of 2006, 31.8% of workers were employed in communal, social and personal services; 18.4% in financial services, insurance and real estate; 10.7% in commerce, restaurants and hotels; 9.8% in construction; 8.9% in utilities; 7.8% in transportation; 3.4% in industry (excluding handcrafts); and 8.4% in agriculture.
Although until the 1960s many indigenous communities were considered by scholars to be autonomous and economically isolated, this was never the case. Economic conditions began forcing many to migrate to work, especially in agriculture for the non-indigenous. However, unlike many other migrant workers, most indigenous people in Chiapas have remained strongly tied to their home communities. A study as early as the 1970s showed that 77 percent of heads of household migrated outside of the Chamula municipality, as local land did not produce sufficiently to support families. In the 1970s, cuts in the price of corn forced many large landowners to convert their fields into pasture for cattle, displacing many hired laborers, as cattle required less work. These agricultural laborers began to work for the government on infrastructure projects financed by oil revenue. It is estimated that from the 1980s to the 1990s as many as 100,000 indigenous people moved from the mountain areas into cities in Chiapas, with some moving out of the state to Mexico City, Cancún and Villahermosa in search of employment.
Agriculture, livestock, forestry and fishing.
Agriculture, livestock, forestry and fishing employ over 53% of the state's population; however, their productivity is considered to be low. Agriculture includes both seasonal and perennial plants. Major crops include corn, beans, sorghum, soybeans, peanuts, sesame seeds, coffee, cacao, sugar cane, mangos, bananas, and palm oil. These crops take up 95% of the cultivated land in the state and account for 90% of agricultural production. Only four percent of fields are irrigated, with the rest dependent on rainfall either seasonally or year round. Chiapas ranks second among the Mexican states in the production of cacao, the product used to make chocolate, and is responsible for about 60 percent of Mexico's total coffee output. The production of bananas, cacao and corn makes Chiapas Mexico's second largest agricultural producer overall.
Coffee is the state's most important cash crop, with a history dating from the 19th century. The crop was introduced in 1846 by Jeronimo Manchinelli, who brought 1,500 seedlings from Guatemala to his farm La Chacara. This was followed by a number of other farms. Coffee production intensified during the regime of Porfirio Díaz, when Europeans came to own many of the large farms in the area.
By 1892, there were 22 coffee farms in the Soconusco region, among them Nueva Alemania, Hamburgo, Chiripa, Irlanda, Argovia, San Francisco, and Linda Vista. Since then coffee production has grown and diversified to include large plantations, the use of both free and forced labor and a significant sector of small producers. While most coffee is grown in the Soconusco, other areas grow it as well, including the municipalities of Oxchuc, Pantheló, El Bosque, Tenejapa, Chenalhó, Larráinzar, and Chalchihuitán, with around six thousand producers. Production also includes organic coffee, with 18 million tons grown annually by some 60,000 producers. One third of these producers are indigenous women and other peasant farmers who grow the coffee under the shade of native trees without the use of agrochemicals. Some of this coffee is even grown in environmentally protected areas such as the El Triunfo reserve, where ejidos with 14,000 people grow the coffee and sell it to cooperatives, which sell it to companies such as Starbucks, but the main market is Europe. Some growers have created cooperatives of their own to cut out the middleman.
Ranching occupies about three million hectares of natural and induced pasture, with about 52% of all pasture induced. Most livestock raising is done by families using traditional methods. Most important are meat and dairy cattle, followed by pigs and domestic fowl. These three account for 93% of the value of production. Annual milk production in Chiapas totals about 180 million liters. The state's cattle production, along with timber from the Lacandon Jungle and energy output, gives it a certain amount of economic clout compared to other states in the region. Forestry is mostly based on conifers and common tropical species, producing 186,858 m3 per year at a value of 54,511,000 pesos. Exploited non-wood species include the Camedor palm tree, harvested for its fronds. The fishing industry is underdeveloped but includes the capture of wild species as well as fish farming. Fish production comes both from the ocean and from the many freshwater rivers and lakes. In 2002, 28,582 tons of fish valued at 441.2 million pesos were produced. Species include tuna, shark, shrimp, mojarra and crab.
Industry and energy.
The state's abundant rivers and streams have been dammed to provide about fifty-five percent of the country's hydroelectric energy. Much of this is sent to other states, accounting for over six percent of all of Mexico's energy output. Main power stations are located at Malpaso, La Angostura, Chicoasén and Peñitas, which produce about eight percent of Mexico's hydroelectric energy. The Manuel Moreno Torres plant on the Grijalva River is the most productive in Mexico. All of the hydroelectric plants are owned and operated by the Federal Electricity Commission (Comisión Federal de Electricidad, CFE).
Chiapas is rich in petroleum reserves. Oil production began during the 1980s, and Chiapas has become the fourth largest producer of crude oil and natural gas among the Mexican states. Many reserves are yet untapped, but between 1984 and 1992, PEMEX drilled nineteen oil wells in the Lacandona Jungle. Currently, petroleum reserves are found in the municipalities of Juárez, Ostuacán, Pichucalco and Reforma in the north of the state, with 116 wells accounting for about 6.5% of the country's oil production. The state also provides about a quarter of the country's natural gas. This production equals of natural gas and 17,565,000 barrels of oil per year.
Industry is limited to small and micro enterprises and includes auto parts, bottling, fruit packing, coffee and chocolate processing, production of lime, bricks and other construction materials, sugar mills, furniture making, textiles, printing and the production of handcrafts. The two largest enterprises are the Comisión Federal de Electricidad and a Petróleos Mexicanos refinery. Chiapas opened its first assembly plant in 2002, a fact that highlights the historical lack of industry in this area.
Handcrafts.
Chiapas is one of the states producing the widest variety of handcrafts and folk art in Mexico. One reason for this is its many indigenous ethnicities, who produce traditional items both as an expression of identity and for commercial reasons. One commercial reason is the market for crafts provided by the tourism industry. Another is that most indigenous communities can no longer provide for their own needs through agriculture. The need to generate outside income has led to many indigenous women producing crafts communally, which has not only had economic benefits but has also involved them in the political process. Unlike many other states, Chiapas has a wide variety of wood resources such as cedar and mahogany, as well as plant species such as reeds, ixtle and palm. It also has minerals such as obsidian, amber, jade and several types of clay, as well as animals used for leather and insects used to create the dyes and colors associated with the region. Items include various types of handcrafted clothing, dishes, jars, furniture, roof tiles, toys, musical instruments, tools and more.
Chiapas's most important handcraft is textiles, most of which are cloth woven on a backstrap loom. Indigenous girls often learn how to sew and embroider before they learn how to speak Spanish. They are also taught how to make natural dyes from insects, along with weaving techniques. Many of the items produced are still for day-to-day use, often dyed in bright colors with intricate embroidery. They include skirts, belts, rebozos, blouses, huipils and shoulder wraps called chals. Pieces are made in red, yellow, turquoise blue, purple, pink, green and various pastels and decorated with motifs such as flowers, butterflies, and birds, all based on local flora and fauna. Commercially, indigenous textiles are most often found in San Cristóbal de las Casas, San Juan Chamula and Zinacantán. The best textiles are considered to be from Magdalenas, Larráinzar, Venustiano Carranza and Sibaca.
One of the main minerals of the state is amber, much of which is 25 million years old, with quality comparable to that found in the Dominican Republic. Chiapan amber has a number of unique qualities, including much that is clear all the way through and some with fossilized insects and plants. Most Chiapan amber is worked into jewelry, including pendants, rings and necklaces. Colors vary from white to yellow/orange to a deep red, but there are also green and pink tones. Since pre-Hispanic times, native peoples have believed amber to have healing and protective qualities. The largest amber mine is in Simojovel, a small village 130 km from Tuxtla Gutiérrez, which produces 95% of Chiapas's amber. Other mines are found in Huitiupán, Totolapa, El Bosque, Pueblo Nuevo Solistahuacán, Pantelhó and San Andrés Duraznal. According to the Museum of Amber in San Cristóbal, almost 300 kg of amber is extracted per month from the state. Prices vary depending on quality and color.
The major center for ceramics in the state is the city of Amatenango del Valle, with its barro blanco (white clay) pottery. The most traditional ceramic in Amatenango and Aguacatenango is a type of large jar called a cantaro, used to transport water and other liquids. Many pieces created from this clay are ornamental, as well as traditional pieces for everyday use such as comals, dishes, storage containers and flowerpots. All pieces here are made by hand using techniques that go back centuries. Other communities that produce ceramics include Chiapa de Corzo, Tonalá, Ocuilpa, Suchiapa and San Cristóbal de las Casas.
Wood crafts in the state center on furniture, brightly painted sculptures and toys. The Tzotzils of San Juan Chamula are known for their sculptures as well as for their sturdy furniture. Sculptures are made from woods such as cedar, mahogany and strawberry tree. Another town noted for its sculptures is Tecpatán. The making of lacquer to decorate wooden and other items goes back to the colonial period. The best-known area for this type of work, called "laca", is Chiapa de Corzo, which has a museum dedicated to it. One reason this type of decoration became popular in the state was that it protected items from the constant humidity of the climate. Much of the laca in Chiapa de Corzo is made in the traditional way with natural pigments and sands to cover gourds, dipping spoons, chests, niches and furniture. It is also used to create the Parachicos masks. Traditional Mexican toys, which have all but disappeared in the rest of Mexico, are still readily found here and include the cajita de la serpiente, yo-yos, ball-in-cup and more. Other wooden items include masks, cooking utensils, and tools. One famous toy is the "muñecos zapatistas" (Zapatista dolls), which are based on the revolutionary group that emerged in the 1990s.
Tourism and general commerce/services.
Ninety-four percent of the state's commercial outlets are small retail stores, with about 6% being wholesalers. There are 111 municipal markets, 55 tianguis, three wholesale food markets and 173 large vendors of staple products. The service sector is the most important to the economy, consisting mostly of commerce, warehousing and tourism.
Tourism brings large numbers of visitors to the state each year. Most of Chiapas's tourism is based on its culture, colonial cities and ecology. The state has a total of 491 ranked hotels with 12,122 rooms. There are also 780 other establishments catering primarily to tourism, such as services and restaurants. There are three main tourist routes: the Maya Route, the Colonial Route and the Coffee Route.
The Maya Route runs along the border with Guatemala in the Lacandon Jungle and includes the sites of Palenque, Bonampak and Yaxchilan, along with the natural attractions of the Agua Azul Waterfalls, Misol-Há Waterfall, and the Catazajá Lake. Palenque is the most important of these sites, and one of the most important tourist destinations in the state. Yaxchilan was a Mayan city along the Usumacinta River that developed between 350 and 810 CE. Bonampak is known for its well-preserved murals. These Mayan sites have made the state an attraction for international tourism. The sites contain a large number of structures, most of which date back over a thousand years, especially to the sixth century. In addition to the sites on the Maya Route, there are others within the state away from the border, such as Toniná, near the city of Ocosingo.
The Colonial Route is mostly in the central highlands, with a significant number of churches, monasteries and other structures from the colonial period, along with some from the 19th century and even into the early 20th. The most important city on this route is San Cristóbal de las Casas, located in the Los Altos region in the Jovel Valley. The historic center of the city is filled with tiled roofs, patios with flowers, balconies, Baroque facades along with Neoclassical and Moorish designs. It is centered on a main plaza surrounded by the cathedral, the municipal palace, the Portales commercial area and the San Nicolás church. In addition, it has museums dedicated to the state's indigenous cultures, one to amber and one to jade, both of which have been mined in the state. Other attractions along this route include Comitán de Domínguez and Chiapa de Corzo, along with small indigenous communities such as San Juan Chamula. The state capital of Tuxtla Gutiérrez does not have many colonial-era structures left, but it lies near the area's most famous natural attraction of the Sumidero Canyon. This canyon is popular with tourists who take boat tours into it on the Grijalva River to see features such as caves (La Cueva del Hombre, La Cueva del Silencio) and the Christmas Tree, which is a rock and plant formation on the side of one of the canyon walls created by a seasonal waterfall. The Coffee Route begins in Tapachula and follows a mountainous road into the Soconusco region. The route passes through Puerto Chiapas, a port with modern infrastructure for shipping exports and receiving international cruises. The route visits a number of coffee plantations, such as Hamburgo, Chiripa, Violetas, Santa Rita, Lindavista, Perú-París, San Antonio Chicarras and Rancho Alegre. These haciendas provide visitors with the opportunity to see how coffee is grown and initially processed on these farms. They also offer a number of ecotourism activities such as mountain climbing, rafting, rappelling and mountain biking. There are also tours into the jungle vegetation and the Tacaná Volcano. In addition to coffee, the region also produces most of Chiapas's soybeans, bananas and cacao. The state has a large number of ecological attractions, most of which are connected to water. The main beaches on the coastline include Puerto Arista, Boca del Cielo, Playa Linda, Playa Aventuras, Playa Azul and Santa Brigida. Others are based on the state's lakes and rivers. Laguna Verde is a lake in the Coapilla municipality. The lake is generally green but its tones constantly change through the day depending on how the sun strikes it. In the early morning and evening hours there can be blue and ochre tones as well. The El Chiflón Waterfall is part of an ecotourism center located in a valley with reeds, sugarcane, mountains and rainforest. It is formed by the San Vicente River and has pools of water at the bottom popular for swimming. The Las Nubes Ecotourism Center is located in the Las Margaritas municipality near the Guatemalan border. The area features a number of turquoise blue waterfalls with bridges and lookout points set up to see them up close. Still others are based on conservation, local culture and other features. The Las Guacamayas Ecotourism Center is located in the Lacandon Jungle on the edge of the Montes Azules reserve. It is centered on the conservation of the red macaw, which is in danger of extinction. The Tziscao Ecotourism Center is centered on a lake with various tones. 
It is located inside the Lagunas de Montebello National Park, with kayaking, mountain biking and archery. Lacanjá Chansayab is located in the interior of the Lacandon Jungle and is a major Lacandon community. It offers some ecotourism activities, such as mountain biking and hiking, as well as cabins. The Grutas de Rancho Nuevo Ecotourism Center is centered on a set of caves with whimsical stalagmite and stalactite formations. There is horseback riding as well. Culture. Architecture. Architecture in the state begins with the archeological sites of the Mayans and other groups who established color schemes and other details that echo in later structures. After the Spanish subdued the area, the building of Spanish-style cities began, especially in the highland areas. Many of the colonial-era buildings are related to Dominicans who came from Seville. This Spanish city had much Arabic influence in its architecture, and this was incorporated into the colonial architecture of Chiapas, especially in structures dating from the 16th to 18th centuries. However, there are a number of architectural styles and influences present in Chiapas's colonial structures, including colors and patterns from Oaxaca and Central America along with indigenous ones from Chiapas. The main colonial structures are the cathedral and Santo Domingo church of San Cristóbal, the Santo Domingo monastery and La Pila in Chiapa de Corzo. The San Cristóbal cathedral has a Baroque facade that was begun in the 16th century, but by the time it was finished in the 17th, it had a mix of Spanish, Arabic, and indigenous influences. It is one of the most elaborately decorated in Mexico. The churches and former monasteries of Santo Domingo, La Merced and San Francisco have ornamentation similar to that of the cathedral. The main structures in Chiapa de Corzo are the Santo Domingo monastery and the La Pila fountain. Santo Domingo has indigenous decorative details such as double-headed eagles as well as a statue of the founding monk. In San Cristóbal, the Diego de Mazariegos house has a Plateresque facade, while that of Francisco de Montejo, built later in the 18th century, has a mix of Baroque and Neoclassical elements. Art Deco structures can be found in San Cristóbal and Tapachula in public buildings as well as a number of rural coffee plantations from the Porfirio Díaz era. Art and literature. Art in Chiapas is based on the use of color and has strong indigenous influence. This dates back to cave paintings such as those found in Sima de las Cotorras near Tuxtla Gutiérrez and the caverns of Rancho Nuevo where human remains and offerings were also found. The best-known pre-Hispanic artworks are the Maya murals of Bonampak, which are the only Mesoamerican murals to have been preserved for over 1500 years. In general, Mayan artwork stands out for its precise depiction of faces and its narrative form. Indigenous forms derive from this background and continue into the colonial period with the use of indigenous color schemes in churches and modern structures such as the municipal palace in Tapachula. Since the colonial period, the state has produced a large number of painters and sculptors. Noted 20th-century artists include Lázaro Gómez, Ramiro Jiménez Chacón, Héctor Ventura Cruz, Máximo Prado Pozo, and Gabriel Gallegos Ramos. The two best-known poets from the state are Jaime Sabines and Rosario Castellanos, both from prominent Chiapan families. 
The first was a merchant and diplomat and the second was a teacher, diplomat, theatre director and the director of the Instituto Nacional Indigenista. Jaime Sabines is widely regarded as Mexico's most influential contemporary poet. His work celebrates everyday people in common settings. Music. The most important instrument in the state is the marimba. In the pre-Hispanic period, indigenous peoples had already been producing music with wooden instruments. The marimba was introduced by African slaves brought to Chiapas by the Spanish. However, it achieved its widespread popularity in the early 20th century due to the formation of the Cuarteto Marimbistico de los Hermanos Gómez in 1918, which popularized the instrument and the popular music it plays not only in Chiapas but in various parts of Mexico and the United States. Along with Cuban Juan Arozamena, they composed the piece "Las chiapanecas", considered to be the unofficial anthem of the state. In the 1940s, they were also featured in a number of Mexican films. Marimbas are constructed in Venustiano Carranza, Chiapa de Corzo and Tuxtla Gutiérrez. Cuisine. As in the rest of Mesoamerica, the basic diet has been based on corn, and Chiapas cooking retains strong indigenous influence. Two important ingredients are chipilín, a fragrant and strongly flavored herb used in most indigenous dishes, and hoja santa, the large anise-scented leaf used in much of southern Mexican cuisine. Chiapan dishes do not incorporate many chili peppers; rather, chili peppers are most often found in the condiments. One reason for that is that a local chili pepper, called the simojovel, is far too hot to use except very sparingly. Chiapan cuisine tends to rely on slightly sweet seasonings in its main dishes; cinnamon, plantains, prunes and pineapple are often found in meat and poultry dishes. Tamales are a major part of the diet and often include chipilín mixed into the dough and hoja santa, within the tamale itself or used to wrap it. One tamale native to the state is the "picte", a fresh sweet corn tamale. Tamales juacanes are filled with a mixture of black beans, dried shrimp, and pumpkin seeds. Meats are centered on the European-introduced beef, pork and chicken, as many native game animals are in danger of extinction. Meat dishes are frequently accompanied by vegetables such as squash, chayote and carrots. Black beans are the favored type of bean. Beef is favored, especially a thin cut called tasajo usually served in a sauce. Pepita con tasajo is a common dish at festivals, especially in Chiapa de Corzo. It consists of a squash-seed-based sauce over reconstituted, shredded dried beef. As Palenque is a cattle-raising area, its beef dishes are particularly good. Pux-Xaxé is a stew with beef organ meats and mole sauce made with tomato, chili bolita and corn flour. Tzispolá is a beef broth with chunks of meat, chickpeas, cabbage and various types of chili peppers. Pork dishes include cochito, which is pork in an adobo sauce. In Chiapa de Corzo, the local version is cochito horneado, which is a roast suckling pig flavored with adobo. Seafood is a strong component in many dishes along the coast. Turula is dried shrimp with tomatoes. Sausages, ham and other cold cuts are most often made and consumed in the highlands. 
In addition to meat dishes, there is chirmol, a cooked tomato sauce flavored with chili pepper, onion and cilantro, and zats, butterfly caterpillars from the Altos de Chiapas that are boiled in salted water, then sautéed in lard and eaten with tortillas, limes, and green chili pepper. Sopa de pan consists of layers of bread and vegetables covered with a broth seasoned with saffron and other flavorings. A Comitán specialty is hearts of palm salad in vinaigrette, and Palenque is known for many versions of fried plantains, including those filled with black beans or cheese. Cheese making is important, especially in the municipalities of Ocosingo, Rayon and Pijijiapan. Ocosingo has its own self-named variety, which is shipped to restaurants and gourmet shops in various parts of the country. Regional sweets include crystallized fruit, coconut candies, flan and compotes. San Cristóbal is noted for its sweets, as well as chocolates, coffee and baked goods. While Chiapas is known for good coffee, there are a number of other local beverages. The oldest is pozol, originally the name for a fermented corn dough. This dough has its origins in the pre-Hispanic period. To make the beverage, the dough is dissolved in water and usually flavored with cocoa and sugar, but sometimes it is left to ferment further. It is then served very cold with plenty of ice. Taxcalate is a drink made from a powder of toasted corn, achiote, cinnamon and sugar prepared with milk or water. Pumbo is a beverage made with pineapple, club soda, vodka, sugar syrup and plenty of ice. Pox is a drink distilled from sugar cane. Religion. As in the rest of Mexico, Christianity was introduced to the native populations of Chiapas by the Spanish conquistadors. However, Catholic beliefs were mixed with indigenous ones to form what is now called "traditionalist" Catholic belief. The Diocese of Chiapas comprises almost the entire state and is centered on San Cristóbal de las Casas. It was founded in 1538 by Pope Paul III to evangelize the area; its most famous bishop of that time was Bartolomé de las Casas. Evangelization focused on grouping indigenous peoples into communities centered on a church. This bishop not only evangelized the people in their own language, but also worked to introduce many of the crafts still practiced today. While still a majority, only 53.9% of Chiapas residents profess the Catholic faith as of 2020, compared to 78.6% of the total national population. Some indigenous people mix Christianity with indigenous beliefs. One particular area where this is strong is the central highlands, in small communities such as San Juan Chamula. In one church in San Cristóbal, Mayan rites, including the sacrifice of animals, are permitted to ask for good health or to "ward off the evil eye." Starting in the 1970s, there has been a shift away from traditional Catholic affiliation to Protestant, Evangelical and other Christian denominations. Presbyterians and Pentecostals attracted a large number of converts, with percentages of Protestants in the state rising from five percent in 1970 to twenty-one percent in 2000. This shift has had a political component as well, with those making the switch tending to identify across ethnic boundaries, especially across indigenous ethnic boundaries, and to oppose the traditional power structure. The National Presbyterian Church in Mexico is particularly strong in Chiapas; the state can be described as one of the denomination's strongholds. 
Both Protestants and Catholics tend to oppose traditional cacique leadership and have often worked to prohibit the sale of alcohol. The latter had the effect of attracting many women to both movements. The growing number of Protestants, Evangelicals and Word of God Catholics challenging traditional authority has caused religious strife in a number of indigenous communities. Tensions have at times been strong, especially in rural areas such as San Juan Chamula. Tension among the groups reached its peak in the 1990s, with a large number of people injured during open clashes. In the 1970s, caciques began to expel dissidents from their communities for challenging their power, initially with the use of violence. By 2000, more than 20,000 people had been displaced, but state and federal authorities did not act to stop the expulsions. Today, the situation has quieted but the tension remains, especially in very isolated communities. Islam. The Spanish Murabitun community, the "Comunidad Islámica en España", based in Granada, Spain, and one of its missionaries, Muhammad Nafia (formerly Aureliano Pérez), now emir of the Comunidad Islámica en México, arrived in the state of Chiapas shortly after the Zapatista uprising and established a commune in the city of San Cristóbal. The group, characterized as anti-capitalistic, entered an ideological pact with the socialist Zapatista group. President Vicente Fox voiced concerns about the influence of fundamentalism and possible connections to the Zapatistas and the Basque terrorist organization ETA, but it appeared that converts had no interest in political extremism. By 2015, many indigenous Mayans and more than 700 Tzotzils had converted to Islam. In San Cristóbal, the Murabitun established a pizzeria, a carpentry workshop and a Quranic school (madrasa) where children learned Arabic and prayed five times a day in the backroom of a residential building, and women in head scarves have become a common sight. Nowadays, most of the Mayan Muslims have left the Murabitun and established ties with the CCIM, now following the orthodox Sunni school of Islam. They built the Al-Kausar Mosque in San Cristóbal de las Casas. Archaeology. The earliest population of Chiapas was in the coastal Soconusco region, where the Chantuto peoples appeared, going back to 5500 BC. This was the oldest Mesoamerican culture discovered to date. The largest and best-known archaeological sites in Chiapas belong to the Mayan civilization. Apart from a few works by Franciscan friars, knowledge of Maya civilization largely disappeared after the Spanish Conquest. In the mid-19th century, John Lloyd Stephens and Frederick Catherwood traveled through the sites in Chiapas and other Mayan areas and published their writings and illustrations. This led to serious work on the culture, including the deciphering of its hieroglyphic writing. In Chiapas, principal Mayan sites include Palenque, Toniná, Bonampak, Lacanja, Sak Tz'i, Chinkultic and Tenam Puente, all in or near the Lacandon Jungle. They are technically more advanced than earlier Olmec sites, which can best be seen in the detailed sculpting and novel construction techniques, including structures of four stories in height. Mayan sites are not only noted for large numbers of structures, but also for glyphs, other inscriptions, and artwork that has provided a relatively complete history of many of the sites. Palenque is the most important Mayan and archaeological site in the state. 
Though much smaller than the huge sites at Tikal or Copán, Palenque contains some of the finest architecture, sculpture and stucco reliefs the Mayans ever produced. The history of the Palenque site begins in 431 CE, with its height under Pakal I (615–683), Chan-Bahlum II (684–702) and Kan-Xul, who reigned between 702 and 721. However, the power of Palenque would be lost by the end of the century. Pakal's tomb was not discovered inside the Temple of Inscriptions until 1949. Today, Palenque is a World Heritage Site and one of the best-known sites in Mexico. The similarly aged site (750/700–600) of Pampa el Pajón preserves burials and cultural items, including cranial modifications. Yaxchilan flourished in the 8th and 9th centuries. The site contains impressive ruins, with palaces and temples bordering a large plaza upon a terrace above the Usumacinta River. The architectural remains extend across the higher terraces and the hills to the south of the river, overlooking both the river itself and the lowlands beyond. Yaxchilan is known for the large quantity of excellent sculpture at the site, such as the monolithic carved stelae and the narrative stone reliefs carved on lintels spanning the temple doorways. Over 120 inscriptions have been identified on the various monuments from the site. The major groups are the Central Acropolis, the West Acropolis and the South Acropolis. The South Acropolis occupies the highest part of the site. The site is aligned with relation to the Usumacinta River, at times causing unconventional orientation of the major structures, such as the two ballcourts. The city of Bonampak features some of the finest remaining Maya murals. The realistically rendered paintings depict human sacrifices, musicians and scenes of the royal court. In fact, the name means "painted murals." It is centered on a large plaza and has a stairway that leads to the Acropolis. There are also a number of notable stelae. Toniná is near the city of Ocosingo, with its main features being the Casa de Piedra (House of Stone) and the Acropolis. The latter is a series of seven platforms with various temples and stelae. This site was a ceremonial center that flourished between 600 and 900 CE. The capital of Sak Tz'i' (an ancient Maya kingdom), now named Lacanja Tzeltal, was revealed in 2020 by researchers led by associate anthropology professor Charles Golden and bioarchaeologist Andrew Scherer, in the backyard of a Mexican farmer in Chiapas. The team found multiple domestic constructions used by the population for religious purposes. "Plaza Muk'ul Ton", or Monuments Plaza, where people used to gather for ceremonies, was also unearthed by the team. Pre-Mayan cultures. While the Mayan sites are the best-known, there are a number of other important sites in the state, including many older than the Maya civilization. The oldest sites are in the coastal Soconusco region. This includes the Mokaya culture, the oldest ceramic culture of Mesoamerica. Later, Paso de la Amada became important; the oldest known Mesoamerican ballcourt was built at this site. Many of these sites are in the Mazatán area of Chiapas. Izapa became an important pre-Mayan site as well. There are also other ancient sites, including Tapachula, Tecpatán and Pijijiapan. These sites contain numerous embankments and foundations that once lay beneath pyramids and other buildings. Some of these buildings have disappeared, and others have lain unexplored, covered by jungle, for about 3,000 years. 
Pijijiapan and Izapa are on the Pacific coast and were the most important pre-Hispanic cities for about 1,000 years, serving as the most important commercial centers between the Mexican Plateau and Central America. Sima de las Cotorras, in the municipality of Ocozocoautla, is a sinkhole 140 meters deep with a diameter of 160 meters. It contains ancient cave paintings depicting warriors, animals and more. It is best known as a breeding area for parrots, thousands of which leave the area at once at dawn and return at dusk. The state has its Museo Regional de Antropología e Historia, located in Tuxtla Gutiérrez, which focuses on the pre-Hispanic peoples of the state and has a room dedicated to its history from the colonial period. Education. The average number of years of schooling is 6.7, which is the beginning of middle school, compared to the national average of 8.6. 16.5% have no schooling at all, 59.6% have only primary school/secondary school, 13.7% finish high school or technical school and 9.8% go to university. Eighteen out of every 100 people 15 years or older cannot read or write, compared to seven out of every 100 nationally. Most of Chiapas's illiterate population are indigenous women, who are often prevented from going to school. School absenteeism and dropout rates are highest among indigenous girls. There are an estimated 1.4 million students in the state from preschool on up. The state has about 61,000 teachers and just over 17,000 centers of education. Preschool and primary schools are divided into modalities called general, indigenous, private and community education, the last sponsored by CONAFE. Middle school is divided into technical, telesecundaria (distance education) and classes for working adults. About 98% of the student population of the state is in state schools. Higher levels of education include "professional medio" (vocational training), general high school and technology-focused high school. At this level, 89% of students are in public schools. There are 105 universities and similar institutions, 58 public and 47 private, serving over 60,500 students. The state university is the Universidad Autónoma de Chiapas (UNACH). It was begun when an organization to establish a state-level institution was formed in 1965, with the university itself opening its doors ten years later in 1975. The university project was partially supported by UNESCO in Mexico. It integrated older schools such as the Escuela de Derecho (Law School), which originated in 1679; the Escuela de Ingeniería Civil (School of Civil Engineering), founded in 1966; and the Escuela de Comercio y Administración, which was located in Tuxtla Gutiérrez. Infrastructure. Transport. The state has approximately 22,500 km of highway, with 10,857 km federally maintained and 11,660 km maintained by the state. Almost all of these kilometers are paved. Major highways include the Las Choapas-Raudales-Ocozocoautla, which links the state to Oaxaca, Veracruz, Puebla and Mexico City. Major airports include Llano San Juan in Ocozocoautla, Francisco Sarabia National Airport (which was replaced by Ángel Albino Corzo International Airport) in Tuxtla Gutiérrez and Corazón de María Airport (which closed in 2010) in San Cristóbal de las Casas. These are used for domestic flights, with the airports in Palenque and Tapachula providing international service into Guatemala. There are 22 other airfields in twelve other municipalities. Rail lines extend over 547.8 km. 
There are two major lines: one in the north of the state that links the center and southeast of the country, and the Costa Panamericana route, which runs from Oaxaca to the Guatemalan border. Chiapas's main port, Puerto Chiapas, is just outside the city of Tapachula. It faces of ocean, with of warehouse space. Next to it there is an industrial park that covers . Puerto Chiapas has of area with a capacity to receive 1,800 containers as well as refrigerated containers. The port serves the state of Chiapas and northern Guatemala. Puerto Chiapas serves to import and export products across the Pacific to Asia, the United States, Canada and South America. It also has connections with the Panama Canal. A marina serves yachts in transit. There is an international airport nearby, as well as a railroad terminal ending at the port proper. In 2010, the port gained a terminal for cruise ships with tours to the Izapa site, the Coffee Route, the city of Tapachula, Pozuelos Lake and an Artesanal Chocolate Tour. Principal exports through the port include bananas and banana trees, corn, fertilizer and tuna. Media. There are thirty-six AM radio stations and sixteen FM stations. There are thirty-seven local television stations and sixty-six repeaters. Newspapers of Chiapas include: "Chiapas Hoy", "Cuarto Poder", "El Heraldo de Chiapas", "El Orbe", "La Voz del Sureste", and "Noticias de Chiapas."
6788
4904587
https://en.wikipedia.org/wiki?curid=6788
Chrysler Building
The Chrysler Building is a 1,046-foot (318.9 m) Art Deco skyscraper in the East Midtown neighborhood of Manhattan, New York City, United States. Located at the intersection of 42nd Street and Lexington Avenue, it is the tallest brick building in the world with a steel framework. It was both the world's first supertall skyscraper and the world's tallest building for 11 months after its completion in 1930. The Chrysler is the 12th-tallest building in the city, tied with The New York Times Building. Originally a project of real estate developer and former New York State Senator William H. Reynolds, the building was commissioned by Walter Chrysler, the head of the Chrysler Corporation. The construction of the Chrysler Building, an early skyscraper, was characterized by a competition with 40 Wall Street and the Empire State Building to become the world's tallest building. The Chrysler Building was funded by Walter Chrysler personally as a real estate investment for his children, but it was not intended as the Chrysler Corporation's headquarters (which was located in Detroit at the Highland Park Chrysler Plant from 1934 to 1996). An annex was completed in 1952, and the building was sold by the Chrysler family the next year, with numerous subsequent owners. When the Chrysler Building opened, there were mixed reviews of the building's design, some calling it inane and unoriginal, others hailing it as modernist and iconic. Reviewers in the late 20th and early 21st centuries regarded the building as a paragon of the Art Deco architectural style. In 2007, it was ranked ninth on the American Institute of Architects' list of America's Favorite Architecture. The facade and interior became New York City designated landmarks in 1978, and the structure was added to the National Register of Historic Places as a National Historic Landmark in 1976. Site. The Chrysler Building is on the eastern side of Lexington Avenue between 42nd and 43rd streets in Midtown Manhattan, New York City, United States. The land was donated to The Cooper Union for the Advancement of Science and Art in 1902. The site is roughly a trapezoid with a frontage on Lexington Avenue; a frontage on 42nd Street; and a frontage on 43rd Street. The site bordered the old Boston Post Road, which predated, and ran aslant of, the Manhattan street grid established by the Commissioners' Plan of 1811. As a result, the east side of the building's base is similarly aslant. The building is assigned its own ZIP Code, 10174. It is one of 41 buildings in Manhattan that have their own ZIP Codes. The Grand Hyatt New York hotel and the Graybar Building are across Lexington Avenue, while the Socony–Mobil Building is across 42nd Street. In addition, the Chanin Building is to the southwest, diagonally across Lexington Avenue and 42nd Street. Architecture. The Chrysler Building was designed by William Van Alen in the Art Deco style and is named after one of its original tenants, automotive executive Walter Chrysler. With a height of 1,046 feet (318.9 m), the Chrysler is the 12th-tallest building in the city, tied with The New York Times Building. The building is constructed of a steel frame infilled with masonry, with areas of decorative metal cladding. The structure contains 3,862 exterior windows. Approximately fifty metal ornaments protrude at the building's corners on five floors, reminiscent of gargoyles on Gothic cathedrals. 
The 31st floor contains gargoyles as well as replicas of the 1929 Chrysler radiator caps, and the 61st floor is adorned with eagles as a nod to America's national bird. The design of the Chrysler Building makes extensive use of bright "Nirosta" stainless steel, an austenitic alloy developed in Germany by Krupp. It was the first use in an American project of this "18–8 stainless steel", so called because it is composed of 18% chromium and 8% nickel. Nirosta was used in the exterior ornaments, the window frames, the crown, and the needle. The steel was an integral part of Van Alen's design, as E.E. Thum explains: "The use of permanently bright metal was of greatest aid in the carrying of rising lines and the diminishing circular forms in the roof treatment, so as to accentuate the gradual upward swing until it literally dissolves into the sky..." Stainless steel producers used the Chrysler Building to evaluate the durability of the product in architecture. In 1929, the American Society for Testing Materials created an inspection committee to study its performance, which regarded the Chrysler Building as the best location to do so; a subcommittee examined the building's panels every five years until 1960, when the inspections were canceled because the panels had shown minimal deterioration. Form. The Chrysler Building's height and legally mandated setbacks influenced Van Alen in his design. The walls of the lowermost sixteen floors rise directly from the sidewalk property lines, except for a recess on one side that gives the building a U-shaped floor plan above the fourth floor. There are setbacks on floors 16, 18, 23, 28, and 31, making the building compliant with the 1916 Zoning Resolution. This gives the building the appearance of a ziggurat on one side and a U-shaped palazzo on the other. Above the 31st floor, there are no more setbacks until the 60th floor, above which the structure is funneled into a Maltese cross shape that "blends the square shaft to the finial", according to author and photographer Cervin Robinson. The floor plans of the first sixteen floors were made as large as possible to optimize the amount of rental space nearest ground level, which was seen as most desirable. The U-shaped cut above the fourth floor served as a shaft for air flow and illumination. The area between floors 28 and 31 added "visual interest to the middle of the building, preventing it from being dominated by the heavy detail of the lower floors and the eye-catching design of the finial. They provide a base to the column of the tower, effecting a transition between the blocky lower stories and the lofty shaft." Facade. Base and shaft. The ground floor exterior is covered in polished black granite from Shastone, while the three floors above it are clad in white marble from Georgia. There are two main entrances, on Lexington Avenue and on 42nd Street, each three floors high with Shastone granite surrounding each proscenium-shaped entryway. At some distance into each main entryway, there are revolving doors "beneath intricately patterned metal and glass screens", designed so as to embody the Art Deco tenet of amplifying the entrance's visual impact. A smaller side entrance on 43rd Street is one story high. There are storefronts consisting of large Nirosta-steel-framed windows at ground level. Office windows penetrate the second through fourth floors. The west and east elevations contain the air shafts above the fourth floor, while the north and south sides contain the receding setbacks. 
Below the 16th floor, the facade is clad with white brick, interrupted by white-marble bands in a manner similar to basket weaving. The inner faces of the brick walls are coated with a waterproof grout mixture measuring about thick. The windows, arranged in grids, do not have window sills, the frames being flush with the facade. Between the 16th and 24th floors, the exterior exhibits vertical white brick columns that are separated by windows on each floor. This visual effect is made possible by the presence of aluminum spandrels between the columns of windows on each floor. There are abstract reliefs on the 20th- through 22nd-floor spandrels, while the 24th floor contains decorative pineapples. Above the third setback, consisting of the 24th through 27th floors, the facade contains horizontal bands and zigzagged gray-and-black brick motifs. The section above the fourth setback, between the 27th and 31st floors, serves as a podium for the main shaft of the building. There are Nirosta-steel decorations above the setbacks. At each corner of the 31st floor, large car-hood ornaments were installed to make the base look larger. These corner extensions help counter a common optical illusion seen in tall buildings with horizontal bands, whose taller floors would normally look larger. The 31st floor also contains a gray and white frieze of hubcaps and fenders, which both symbolizes the Chrysler Corporation and serves as a visual signature of the building's Art Deco design. The hood ornaments take the shape of Mercury's winged helmet and resemble those installed on Chrysler vehicles at the time. The shaft of the tower was designed to emphasize both the horizontal and vertical: each of the tower's four sides contains three columns of windows, each framed by bricks and an unbroken marble pillar that rises along the entirety of each side. The spandrels separating the windows contain "alternating vertical stripes in gray and white brick", while each corner contains horizontal rows of black brick. Crown and spire. The Chrysler Building is renowned for, and recognized by, its terraced crown, which is an extension of the main tower. Composed of seven radiating terraced arches, Van Alen's design of the crown is a cruciform groin vault of seven concentric members with transitioning setbacks. The entire crown is clad with Nirosta steel, ribbed and riveted in a radiating sunburst pattern with many triangular vaulted windows, reminiscent of the spokes of a wheel. The windows are repeated, in smaller form, on the terraced crown's seven narrow setbacks. Due to the curved shape of the dome, the Nirosta sheets had to be measured on site, so most of the work was carried out in workshops on the building's 67th and 75th floors. According to Robinson, the terraced crown "continue[s] the wedding-cake layering of the building itself. This concept is carried forward from the 61st floor, whose eagle gargoyles echo the treatment of the 31st, to the spire, which extends the concept of 'higher and narrower' forward to infinite height and infinitesimal width. This unique treatment emphasizes the building's height, giving it an other worldly atmosphere reminiscent of the fantastic architecture of Coney Island or the Far East." Television station WCBS-TV (Channel 2) originated its transmission from the top of the Chrysler Building in 1938. WCBS-TV transmissions were shifted to the Empire State Building in 1960 in response to competition from RCA's transmitter on that building. 
For many years WPAT-FM and WTFM (now WKTU) also transmitted from the Chrysler Building, but their move to the Empire State Building by the 1970s ended commercial broadcasting from the structure. The crown and spire are illuminated by a combination of fluorescent lights framing the crown's distinctive triangular windows and colored floodlights that face toward the building, allowing it to be lit in a variety of schemes for special occasions. The V-shaped fluorescent "tube lighting" – hundreds of 480V 40W bulbs framing 120 window openings – was added in 1981, although it had been part of the original design. Until 1998, the lights were turned off at 2 am, but "The New York Observer" columnist Ron Rosenbaum convinced Tishman Speyer to keep the lights on until 6 am. Since 2015, the Chrysler Building and other city skyscrapers have been part of the Audubon Society's Lights Out program, turning off their lights during bird migration seasons. Interior. The interior of the building has several elements that were innovative when the structure was constructed. The partitions between the offices are soundproofed and divided into interchangeable sections, so the layout of any office could be changed quickly and comfortably. Pipes under the floors carry both telephone and electricity cables. The topmost stories are the smallest in the building and have about each. Lobby. The lobby is triangular in plan, connecting with entrances on Lexington Avenue, 42nd Street, and 43rd Street. The lobby was the only publicly accessible part of the Chrysler Building by the 2000s. The three entrances contain Nirosta steel doors, above which are etched-glass panels that allow natural light to illuminate the space. The floors contain bands of yellow travertine from Siena, which mark the path between the entrances and elevator banks. The writer Eric Nash described the lobby as a paragon of the Art Deco style, with clear influences of German Expressionism. Chrysler wanted the design to impress other architects and automobile magnates, so he imported various materials regardless of the extra costs incurred. The walls are covered with huge slabs of African red granite. The walls also contain storefronts and doors made of Nirosta steel. There is a wall panel dedicated to the work of clinchers, surveyors, masons, carpenters, plasterers, and builders. Fifty different figures were modeled after workers who participated in its construction. In 1999, the mural was returned to its original state after a restoration that removed the polyurethane coating and filled in holes added in the 1970s. Originally, Van Alen's plans for the lobby included four large supporting columns, but they were removed after Chrysler objected on the grounds that the columns made the lobby appear "cramped". The lobby has dim lighting which, combined with the appliqués of the lamps, creates an intimate atmosphere and highlights the space. Vertical bars of fluorescent light are covered with Belgian blue marble and Mexican amber onyx bands, which soften and diffuse the light. The marble and onyx bands are designed as inverted chevrons. Opposite the Lexington Avenue entrance is a security guard's desk topped by a digital clock. The panel behind the desk is made of marble, surrounded by Nirosta steel. The lobby connects to four elevator banks, each of a different design. To the north and south of the security desk are terrazzo staircases leading to the second floor and basement. The stairs contain marble walls and Nirosta-steel railings. 
The outer walls are flat but are clad with marble strips that are slightly angled to each other, which give the impression of being curved. The inner railings of each stair are designed with zigzagging Art Deco motifs, ending at red-marble newel posts on the ground story. Above each stair are aluminum-leaf ceilings with etched-glass chandeliers. The ceiling contains a mural, "Transport and Human Endeavor", designed by Edward Trumbull. The mural's theme is "energy and man's application of it to the solution of his problems", and it pays homage to the Golden Age of Aviation and the Machine Age. The mural is painted in the shape of a "Y" with ocher and golden tones. The central image of the mural is a "muscled giant whose brain directs his boundless energy to the attainment of the triumphs of this mechanical era", according to a 1930 pamphlet that advertised the building. The mural's Art Deco style is manifested in characteristic triangles, sharp angles, slightly curved lines, chrome ornaments, and numerous patterns. The mural depicts several silver planes, including the "Spirit of St. Louis", as well as furnaces of incandescent steel and the building itself. When the building opened, the first and second floors housed a public exhibition of Chrysler vehicles. The exhibition, known as the Chrysler Automobile Salon, was near the corner of Lexington Avenue and 42nd Streets, and opened in 1936. The ground floor featured "invisible glass" display windows, a diameter turntable upon which automobiles were displayed, and a ceiling with lights arranged in concentric circles. Escalators led to the showroom's second floor where Plymouths, Dodges, and DeSotos were sold. The Chrysler Salon remained operational through at least the 1960s. Elevators. There are 32 elevators in the skyscraper, clustered into four banks. At the time of opening, 28 of the elevators were for passenger use. Each bank serves different floors within the building, with several "express" elevators going from the lobby to a few landings in between, while "local" elevators connect the landings with the floors above these intermediate landings. As per Walter Chrysler's wishes, the elevators were designed to run at a rate of , despite the speed restriction enforced in all city elevators at the time. This restriction was loosened soon after the Empire State Building opened in 1931, as that building had also been equipped with high-speed elevators. The Chrysler Building also had three of the longest elevator shafts in the world at the time of completion. Over the course of a year, Van Alen painstakingly designed these elevators with the assistance of L.T.M. Ralston, who was in charge of developing the elevator cabs' mechanical parts. The cabs were manufactured by the Otis Elevator Company, while the doors were made by the Tyler Company. The dimensions of each elevator were deep by wide. Within the lobby, there are ziggurat-shaped Mexican onyx panels above the elevator doors. The doors are designed in a lotus pattern and are clad with steel and wood. When the doors are closed, they resemble "tall fans set off by metallic palm fronds rising through a series of silver parabolas, whose edges were set off by curved lilies" from the outside, as noted by Curcio. However, when a set of doors is open, the cab behind the doors resembles "an exquisite Art Deco room". These elements were influenced by ancient Egyptian designs, which significantly impacted the Art Deco style. 
According to Vincent Curcio, "these elevator interiors were perhaps the single most beautiful and, next to the dome, the most important feature of the entire building." Even though the woods in the elevator cabs were arranged in four basic patterns, each cab had a unique combination of woods. Curcio stated that "if anything the building is based on patterned fabrics, [the elevators] certainly are. Three of the designs could be characterized as having 'geometric', 'Mexican' and vaguely 'art nouveau' motifs, which reflect the various influences on the design of the entire building." The roof of each elevator was covered with a metal plate whose design was unique to that cab, which in turn was placed on a polished wooden pattern that was also customized to the cab. Hidden behind these plates were ceiling fans. Curcio wrote that these elevators "are among the most beautiful small enclosed spaces in New York, and it is fair to say that no one who has seen or been in them has forgotten them". Curcio compared the elevators to the curtains of a Ziegfeld production, noting that each lobby contains lighting that peaks in the middle and slopes down on either side. The decoration of the cabs' interiors was also a nod to the Chrysler Corporation's vehicles: cars built during the building's early years had dashboards with wooden moldings. Both the doors and cab interiors were considered to be works of extraordinary marquetry. Basement. On the 42nd Street side of the Chrysler Building, a staircase from the street leads directly under the building to the New York City Subway's Grand Central–42nd Street station. It is part of the structure's original design. The Interborough Rapid Transit Company, which at the time was the operator of all the routes serving the 42nd Street station, originally sued to block construction of the new entrance because it would cause crowding, but the New York City Board of Transportation pushed to allow the corridor anyway. Chrysler eventually built and paid for the building's subway entrance. Work on the new entrance started in March 1930, and it opened along with the Chrysler Building two months later. The basement also had a "hydrozone water bottling unit" that would filter tap water into drinkable water for the building's tenants. The drinkable water would then be bottled and shipped to higher floors. Upper stories. Cloud Club. The private Cloud Club formerly occupied the 66th through 68th floors. It opened in July 1930 with some three hundred members, all wealthy males who formed the city's elite. Its creation was spurred by Texaco's wish for a proper restaurant for its executives prior to renting fourteen floors in the building. The Cloud Club was a compromise between William Van Alen's modern style and Walter Chrysler's stately and traditional tastes. A member had to be elected and, if accepted, paid an initial fee of $200, plus a $150 to $300 annual fee. Texaco executives comprised most of the Cloud Club's membership. The club and its dining room may have inspired the Rockefeller Center Luncheon Club at the Rainbow Room in 30 Rockefeller Plaza. There was a Tudor-style foyer on the 66th floor with oak paneling, as well as an old English-style grill room with wooden floors, wooden beams, wrought-iron chandeliers, and glass and lead doors. The main dining room had a futuristic appearance, with polished granite columns and etched glass appliqués in Art Deco style. There was a mural of a cloud on the ceiling and a mural of Manhattan on the dining room's north side. 
The 66th and 67th floors were connected by a Renaissance-style marble and bronze staircase. The 67th floor had an open bar with dark-wood paneling and furniture. On the same floor, Walter Chrysler and Texaco both had private dining rooms. Chrysler's dining room had a black and frosted-blue glass frieze of automobile workers. Texaco's dining room contained a mural across two walls; one wall depicted a town in New England with a Texaco gas station, while the other depicted an oil refinery and Texaco truck. The south side of the 67th floor also contained a library with wood-paneled walls and fluted pilasters. The 68th floor mainly contained service spaces. In the 1950s and 1960s, members left the Cloud Club for other clubs. Texaco moved to Westchester County in 1977, and the club closed two years later. Although there have been several projects to rehabilitate the club or transform it into a disco or a gastronomic club, these plans have never materialized, as then-owner Cooke reportedly did not want a "conventional" restaurant operating within the old club. Tishman Speyer rented the top two floors of the old Cloud Club. The old staircase has been removed, as have many of the original decorations, which prompted objections from the Art Deco Society of New York. Private Chrysler offices. Originally, Walter Chrysler had a two-story apartment on the 69th and 70th floors with a fireplace and a private office. The office also contained a gymnasium and the loftiest bathrooms in the city. The office had a medieval ambience with leaded windows, elaborate wooden doors, and heavy plaster. Chrysler did not use his gym much, instead choosing to stay at the Chrysler Corporation's headquarters in Detroit. Subsequently, the 69th and 70th floors were converted into a dental clinic. In 2005, a report by "The New York Times" found that one of the dentists, Charles Weiss, had operated at the clinic's current rooftop location since 1969. The office still had the suite's original bathroom and gymnasium. Chrysler also had a unit on the 58th through 60th floors, which served as his residence. Observation deck and attic. From the building's opening until 1945, it contained an observation deck on the 71st floor, called "Celestial". For fifty cents, visitors could transit its circumference through a corridor with vaulted ceilings painted with celestial motifs and bedecked with small hanging glass planets. The center of the observatory contained the toolbox that Walter P. Chrysler used at the beginning of his career as a mechanic; it was later preserved at the Chrysler Technology Center in Auburn Hills, Michigan. An image of the building resembling a rocket hung above it. According to a contemporary brochure, views of up to were possible on a clear day; but the small triangular windows of the observatory created strange angles that made viewing difficult, depressing visitor traffic. When the Empire State Building opened in 1931 with two observatories at a higher elevation, the Chrysler observatory lost its clientele. After the observatory closed, it was used to house radio and television broadcasting equipment. Since 1986, the old observatory has housed the office of architects Harvey Morse and Cowperwood Interests. The stories above the 71st floor are designed mostly for exterior appearance, functioning mainly as landings for the stairway to the spire, and do not contain office space. They are very narrow, have low and sloping roofs, and are only used to house radio transmitters and other mechanical and electrical equipment. 
For example, the 73rd floor houses the motors of the elevators and a water tank, part of which is reserved for extinguishing fires. History. In the mid-1920s, New York's metropolitan area surpassed London's as the world's most populous metropolitan area and its population exceeded ten million by the early 1930s. The era was characterized by profound social and technological changes. Consumer goods such as radio, cinema, and the automobile became widespread. In 1927, Walter Chrysler's automotive company, the Chrysler Corporation, became the third-largest car manufacturer in the United States, behind Ford and General Motors. The following year, Chrysler was named "Time" magazine's "Person of the Year". The economic boom of the 1920s and speculation in the real estate market fostered a wave of new skyscraper projects in New York City. The Chrysler Building was built as part of an ongoing building boom that resulted in the city having the world's tallest building from 1908 to 1974. Following the end of World War I, European and American architects came to see simplified design as the epitome of the modern era and Art Deco skyscrapers as symbolizing progress, innovation, and modernity. The 1916 Zoning Resolution restricted the height that street-side exterior walls of New York City buildings could rise before needing to be set back from the street. This led to the construction of Art Deco structures in New York City with significant setbacks, large volumes, and striking silhouettes that were often elaborately decorated. Art Deco buildings were constructed for only a short period of time; but because that period was during the city's late-1920s real estate boom, the numerous skyscrapers built in the Art Deco style predominated in the city skyline, giving it the romantic quality seen in films and plays. The Chrysler Building project was shaped by these circumstances. Development. Originally, the Chrysler Building was to be the Reynolds Building, a project of real estate developer and former New York state senator William H. Reynolds. Prior to his involvement in planning the building, Reynolds was best known for developing Coney Island's Dreamland amusement park. When the amusement park was destroyed by a fire in 1911, Reynolds turned his attention to Manhattan real estate, where he set out to build the tallest building in the world. Planning. In 1921, Reynolds rented a large plot of land at the corner of Lexington Avenue and 42nd Street with the intention of building a tall building on the site. Reynolds did not develop the property for several years, prompting the Cooper Union to try to increase the assessed value of the property in 1924. The move, which would force Reynolds to pay more rent, was unusual because property owners usually sought to decrease their property assessments and pay lower taxes. Reynolds hired the architect William Van Alen to design a forty-story building there in 1927. Van Alen's original design featured many Modernist stylistic elements, with glazed, curved windows at the corners. Van Alen was respected in his field for his work on the Albemarle Building at Broadway and 24th Street, which he had designed in collaboration with his partner H. Craig Severance. Van Alen and Severance complemented each other, with Van Alen being an original, imaginative architect and Severance being a shrewd businessperson who handled the firm's finances. The relationship between them became tense over disagreements on how best to run the firm. 
A 1924 article in the "Architectural Review", praising the Albemarle Building's design, had mentioned Van Alen as the designer in the firm and ignored Severance's role. The architects' partnership dissolved acrimoniously several months later, with lawsuits over the firm's clients and assets lasting over a year. The rivalry influenced the design of the future Chrysler Building, since Severance's more traditional architectural style would otherwise have restrained Van Alen's more modern outlook. Refinement of designs. By February 2, 1928, the proposed building's height had been increased to 54 stories, which would have made it the tallest building in Midtown. The proposal was changed again two weeks later, with official plans for a 63-story building. A little more than a week after that, the plan was changed for the third time, with two additional stories added. By this time, 42nd Street and Lexington Avenue were both hubs for construction activity, due to the removal of the Third Avenue Elevated's 42nd Street spur, which was seen as a blight on the area. The adjacent 56-story Chanin Building was also under construction. Because of the elevated spur's removal, real estate speculators believed that Lexington Avenue would become the "Broadway of the East Side", causing a ripple effect that would spur developments farther east. In April 1928, Reynolds signed a 67-year lease for the plot and finalized the details of his ambitious project. Van Alen's original design for the skyscraper called for a base with first-floor showroom windows that would be triple-height, and above would be 12 stories with glass-wrapped corners, to create the impression that the tower was floating in mid-air. Reynolds's main contribution to the building's design was his insistence that it have a metallic crown, despite Van Alen's initial opposition; the metal-and-crystal crown would have looked like "a jeweled sphere" at night. Originally, the skyscraper would have risen , with 67 floors. These plans were approved in June 1928. Van Alen's drawings were unveiled in the following August and published in a magazine run by the American Institute of Architects (AIA). Reynolds ultimately devised an alternate design for the Reynolds Building, which was published in August 1928. The new design was much more conservative, with an Italianate dome that a critic compared to Governor Al Smith's bowler hat, and a brick arrangement on the upper floors that simulated windows in the corners, a detail that remains in the current Chrysler Building. This design almost exactly reflected the shape, setbacks, and the layout of the windows of the current building, but with a different dome. With the design complete, groundbreaking for the Reynolds Building took place on September 19, 1928, but by late 1928, Reynolds did not have the means to carry on construction. Chrysler's plans and restart of construction. Walter Chrysler offered to buy the building in early October 1928, and Reynolds sold the plot, lease, plans, and architect's services to Chrysler on October 15, 1928, for more than $2.5 million. That day, the Goodwin Construction Company began demolition of what had been built. A contract was awarded on October 28, and demolition was completed on November 9. Chrysler's initial plans for the building were similar to Reynolds's, but with the 808-foot building having 68 floors instead of 67. 
The plans entailed a ground-floor pedestrian arcade; a facade of stone below the fifth floor and brick-and-terracotta above; and a three-story bronze-and-glass "observation dome" at the top. However, Chrysler wanted a more progressive design, and he worked with Van Alen to redesign the skyscraper to be tall. At the new height, Chrysler's building would be taller than the Woolworth Building, a building in lower Manhattan that was the world's tallest at the time. At one point, Chrysler had requested that Van Alen shorten the design by ten floors, but reneged on that decision after realizing that the increased height would also result in increased publicity. From late 1928 to early 1929, modifications to the design of the dome continued. In March 1929, the press published details of an "artistic dome" that had the shape of a giant thirty-pointed star, which would be crowned by a sculpture five meters high. The final design of the dome included several arches and triangular windows. Lower down, various architectural details were modeled after Chrysler automobile products, such as the hood ornaments of the Plymouth. The building's gargoyles on the 31st floor and the eagles on the 61st floor were created to represent flight and to embody the machine age of the time. Even the topmost needle was built using a process similar to one Chrysler used to manufacture his cars, with precise "hand craftsmanship". In his autobiography, Chrysler says he suggested that his building be taller than the Eiffel Tower. Meanwhile, excavation of the new building's foundation began in mid-November 1928 and was completed in mid-January 1929, when bedrock was reached. A total of of rock and of soil were excavated for the foundation, equal to 63% of the future building's weight. Construction of the building proper began on January 21, 1929. The Carnegie Steel Company provided the steel beams, the first of which was installed on March 27; and by April 9, the first upright beams had been set into place. The steel structure was "a few floors" high by June 1929, 35 floors high by early August, and completed by September. Despite a frantic steelwork construction pace of about four floors per week, no workers died during the construction of the skyscraper's steelwork. Chrysler lauded this achievement, saying, "It is the first time that any structure in the world has reached such a height, yet the entire steel construction was accomplished without loss of life". In total, 391,881 rivets were used, and approximately 3,826,000 bricks were laid to create the non-loadbearing walls of the skyscraper. Walter Chrysler personally financed the construction with his income from his car company. The Chrysler Building's height officially surpassed the Woolworth Building's on October 16, 1929, making the Chrysler the world's tallest building. Competition for "world's tallest building" title. The same year that the Chrysler Building's construction started, banker George L. Ohrstrom proposed the construction of a 47-story office building at 40 Wall Street downtown, designed by Van Alen's former partner Severance. Shortly thereafter, Ohrstrom expanded his project to 60 floors, but it was still shorter than the Woolworth and Chrysler buildings. That April, Severance increased 40 Wall's height to with 62 floors, exceeding the Woolworth's height by and the Chrysler's by . 40 Wall Street and the Chrysler Building started competing for the title of "world's tallest building". 
The Empire State Building, on 34th Street and Fifth Avenue, entered the competition in 1929. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, which helped fuel the building boom in major cities. Van Alen expanded the Chrysler Building's height to , prompting Severance to increase the height of 40 Wall Street to in April 1929. Construction of 40 Wall Street began that May and was completed twelve months later. In response, Van Alen obtained permission for a spire. He had it secretly constructed inside the frame of the Chrysler Building, ensuring that Severance did not know the Chrysler Building's ultimate height until the end. The spire was delivered to the site in four sections. On October 23, 1929, one week after the Chrysler Building surpassed the Woolworth Building's height and one day before the Wall Street Crash of 1929, the spire was assembled. According to one account, "the bottom section of the spire was hoisted to the top of the building's dome and lowered into the 66th floor of the building." Then, within 90 minutes the rest of the spire's pieces were raised and riveted in sequence, raising the tower to 1,046 feet. Van Alen, who witnessed the process from the street along with its engineers and Walter Chrysler, compared the experience to watching a butterfly leaving its cocoon. Van Alen explained the design and construction of the crown and needle in the October 1930 edition of "Architectural Forum". The steel tip brought the Chrysler Building to a height of , greatly exceeding 40 Wall Street's height. Contemporary news media did not write of the spire's erection, nor were there any press releases celebrating it. Even the "New York Herald Tribune", which had virtually continuous coverage of the tower's construction, did not report on the spire's installation until days after the spire had been raised. Having ordered Van Alen to change the Chrysler's original roof from a stubby Romanesque dome to the narrow steel spire, Chrysler realized that his tower's height would exceed the Empire State Building's as well. However, the Empire State's developer John J. Raskob reviewed the plans and realized that he could add five more floors and a spire of his own to his 80-story building, and he acquired additional plots to support that building's height extension. Two days later, the Empire State Building's co-developer, former governor Al Smith, announced the updated plans for that skyscraper, with an observation deck on the 86th-floor roof at a height of , higher than the Chrysler's 71st-floor observation deck at . Completion. In January 1930, it was announced that the Chrysler Corporation would maintain satellite offices in the Chrysler Building during Automobile Show Week. The skyscraper was never intended to become the Chrysler Corporation's headquarters, which remained in Detroit. The first leases by outside tenants were announced in April 1930, before the building was officially completed. The building was formally opened on May 27, 1930, in a ceremony that coincided with the 42nd Street Property Owners and Merchants Association's meeting that year. In the lobby of the building, a bronze plaque that read "in recognition of Mr. Chrysler's contribution to civic advancement" was unveiled. Former Governor Smith, former Assemblyman Martin G. 
McCue, and 42nd Street Association president George W. Sweeney were among those in attendance. By June, it was reported that 65% of the available space had been leased. By August, the building was declared complete, but the New York City Department of Construction did not mark it as finished until February 1932. The added height of the spire allowed the Chrysler Building to surpass 40 Wall Street as the tallest building in the world and the Eiffel Tower as the tallest structure. The Chrysler Building was thus the first man-made structure to be taller than and, by extension, the world's first supertall skyscraper. As one newspaper noted, the tower was also taller than the highest points of five states. The tower remained the world's tallest for 11 months after its completion. The Chrysler Building was appraised at $14 million, but was exempt from city taxes per an 1859 law that gave tax exemptions to sites owned by the Cooper Union. The city had attempted to repeal the tax exemption, but Cooper Union had opposed that measure. Because the Chrysler Building retains the tax exemption, it has paid Cooper Union for the use of their land since opening. While the Chrysler Corporation was a tenant, it was not involved in the construction or ownership of the Chrysler Building; rather, the tower was a project of Walter P. Chrysler for his children. In his autobiography, Chrysler wrote that he wanted to erect the building "so that his sons would have something to be responsible for". Van Alen's satisfaction at these accomplishments was likely muted by Walter Chrysler's later refusal to pay the balance of his architectural fee. Chrysler alleged that Van Alen had received bribes from suppliers, and Van Alen had not signed any contracts with Walter Chrysler when he took over the project. Van Alen sued and the courts ruled in his favor, requiring Chrysler to pay Van Alen $840,000, or six percent of the total budget of the building. However, the lawsuit against Chrysler markedly diminished Van Alen's reputation as an architect, which, along with the effects of the Great Depression and negative criticism, ended up ruining his career. Van Alen ended his career as professor of sculpture at the nearby Beaux-Arts Institute of Design and died in 1954. According to author Neal Bascomb, "The Chrysler Building was his greatest accomplishment, and the one that guaranteed his obscurity." The Chrysler Building's distinction as the world's tallest building was short-lived. John Raskob realized the 1,050-foot Empire State Building would only be taller than the Chrysler Building, and Raskob was afraid that Walter Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute." Another revision brought the Empire State Building's roof to , making it the tallest building in the world by far when it opened on May 1, 1931. However, the Chrysler Building is still the world's tallest steel-supported brick building. The Chrysler Building fared better commercially than the Empire State Building did: by 1935, the Chrysler had already rented 70 percent of its floor area. By contrast, Empire State had only leased 23 percent of its space and was popularly derided as the "Empty State Building". Use. 1940s to 1960s. The Chrysler family inherited the property after the death of Walter Chrysler in 1940, with the property being under the ownership of W.P. Chrysler Building Corporation. 
In 1944, the corporation filed plans to build a 38-story annex to the east of the building, at 666 Third Avenue. In 1949, this was revised to a 32-story annex costing $9 million. The annex building, designed by Reinhard, Hofmeister & Walquist, had a facade similar to that of the original Chrysler Building. The stone for the original building was no longer manufactured, and had to be specially replicated. Construction started on the annex in June 1950, and the first tenants started leasing in June 1951. The building itself was completed by 1952, and a sky bridge connecting the two buildings' seventh floors was built in 1959. The family sold the building in 1953 to William Zeckendorf for its assessed price of $18 million. The 1953 deal included the annex and the nearby Graybar Building, which, along with the Chrysler Building, sold for a combined $52 million. The new owners were Zeckendorf's company Webb and Knapp, who held a 75% interest in the sale, and the Graysler Corporation, who held a 25% stake. At the time, it was reported to be the largest real estate sale in New York City's history. In 1957, the Chrysler Building, its annex, and the Graybar Building were sold for $66 million to Lawrence Wien's realty syndicate, setting a new record for the largest sale in the city. In 1960, the complex was purchased by Sol Goldman and Alex DiLorenzo, who received a mortgage from the Massachusetts Mutual Life Insurance Company. The next year, the building's stainless steel elements, including the needle, crown, gargoyles, and entrance doors, were polished for the first time. A group of ten workers steam-cleaned the facade below the 30th floor, and manually cleaned the portion of the tower above the 30th floor, for a cost of about $200,000. Under Goldman and DiLorenzo's operation, the building began to develop leaks and cracked walls, and about of garbage piled up in the basement. The scale of the deterioration led one observer to say that the Chrysler Building was being operated "like a tenement in the South Bronx". The Chrysler Building remained profitable until 1974, when the owners faced increasing taxes and fuel costs. 1970s to mid-1990s. Foreclosure proceedings against the building began in August 1975, when Goldman and DiLorenzo defaulted on the $29 million first mortgage and a $15 million second mortgage. The building was about 17 percent vacant at the time. Massachusetts Mutual acquired the Chrysler Building for $35 million, purchasing all the outstanding debt on the building via several transactions. The next year, the Chrysler Building was designated as a National Historic Landmark. Texaco, one of the building's major tenants, was relocating to Westchester County, New York, by then, vacating hundreds of thousands of square feet at the Chrysler Building. In early 1978, Mass Mutual devised plans to renovate the facade, heating, ventilation, air-conditioning, elevators, lobby murals, and Cloud Club headquarters for $23 million. At a press conference announcing the renovation, mayor Ed Koch proclaimed that "the steel eagles and the gargoyles of the Chrysler Building are all shouting the renaissance of New York". Massachusetts Mutual had hired Josephine Sokolski, who had proposed modifying Van Alen's original lobby design substantially. After the renovation was announced, the New York City Landmarks Preservation Commission (LPC) considered designating the Chrysler Building as a city landmark. 
Though Mass Mutual had proclaimed "sensitivity and respect" for the building's architecture, it had opposed the city landmark designation, concerned that the designation would hinder leasing. At the time, the building had of vacant floor space, representing 40% of the total floor area. The owners hired the Edward S. Gordon Company as the building's leasing agent, and the firm leased of vacant space within five years. The LPC designated the lobby and facade as city landmarks in September 1978. Massachusetts Mutual had hired Josephine Sokolski to renovate the lobby, but the LPC objected that many aspects of Sokolski's planned redesign had deviated too much from Van Alen's original design. As a result of these disputes, the renovation of the lobby was delayed. The building was sold again in August 1979, this time to entrepreneur and Washington Redskins owner Jack Kent Cooke, in a deal that also transferred ownership of the Los Angeles Kings and Lakers to Jerry Buss. At the time, the building was 96 percent occupied. The new owners hired Kenneth Kleiman of Descon Interiors to redesign the lobby and elevator cabs in a style that was much closer to Van Alen's original design. Cooke also oversaw the completion of a lighting scheme at the pinnacle, which had been part of the original design but had never been installed. The lighting system, consisting of 580 fluorescent tubes installed within the triangular windows of the top stories, was first illuminated in September 1981. Cooke next hired Hoffman Architects to restore the exterior and spire from 1995 to 1996. The joints in the now-closed observation deck were polished, and the facade restored, as part of a $1.5 million project. Some damaged steel strips of the needle were replaced and several parts of the gargoyles were re-welded together. The cleaning received the New York Landmarks Conservancy's Lucy G. Moses Preservation Award for 1997. Cooke died in April 1997, and his mortgage lender Fuji Bank moved to foreclose on the building the next month. Shortly after Fuji announced its intent to foreclose, several developers and companies announced that they were interested in buying the building. Ultimately, 20 potential buyers submitted bids to buy the Chrysler Building and several adjacent buildings. Late 1990s to 2010s. Tishman Speyer Properties and the Travelers Insurance Group won the right to buy the building in November 1997, having submitted a bid for about $220 million. Tishman Speyer had negotiated a 150-year lease from the Cooper Union, which continued to own the land under the Chrysler Building. In 1998, Tishman Speyer announced that it had hired Beyer Blinder Belle to renovate the building and incorporate it into a commercial complex known as the Chrysler Center. As part of this project, EverGreene Architectural Arts restored the "Transport and Human Endeavor" mural in the lobby, which had been covered up during the late-1970s renovation. The renovation cost $100 million. In 2001, a 75 percent stake in the building was sold for US$300 million to TMW, the German arm of an Atlanta-based investment fund. The building was 95 percent occupied by 2005. In June 2008, it was reported that the Abu Dhabi Investment Council was in negotiations to buy TMW's 75 percent ownership stake, Tishman Speyer's 15 percent stake, and a share of the Trylons retail structure next door for US$800 million. 
The transaction was completed the next month, and the Abu Dhabi Investment Council assumed a 90 percent stake in the building, with Tishman Speyer retaining 10 percent. Tishman continued to manage the building and paid the Cooper Union $7.5 million a year. From 2010 to 2011, the building's energy, plumbing, and waste management systems were renovated. This resulted in a 21 percent decrease in the building's total energy consumption and 64 percent decrease in water consumption. In addition, 81 percent of waste was recycled. In 2012, the building received a LEED Gold accreditation from the U.S. Green Building Council, which recognized the building's environmental sustainability and energy efficiency. RFR Holding operation. The Abu Dhabi Investment Council and Tishman Speyer put the Chrysler Building's leasehold for sale again in January 2019. That March, the media reported that Aby Rosen's RFR Holding LLC, in a joint venture with the Austrian Signa Group, had reached an agreement to purchase the leasehold at a steeply discounted $150 million. In exchange, Rosen had to pay the Cooper Union $32.5 million a year, a steep increase from the rate the previous leaseholders had paid. Rosen initially planned to convert the building into a hotel, but he dropped these plans in April 2019, citing difficulties with the ground lease. Rosen then announced plans for an observation deck on the 61st-story setback, which the LPC approved in May 2020. He also wanted to reopen the Cloud Club and attract multiple restaurateurs. Rosen sought to renegotiate the terms of his ground lease with Cooper Union in 2020, and he evicted storeowners from all of the building's shops in an ultimately unsuccessful attempt to renovate the retail space. To attract tenants following the onset of the COVID-19 pandemic in New York City in 2020, he converted the Chrysler Building's ground-floor space into a tenant amenity center. RFR estimated that it had spent $170 million to renovate the building. RFR and Signa attempted to restructure the ground lease again in 2021 and 2023, both times without success. By then, according to an anonymous source cited by "Curbed", RFR was losing an estimated $1 million a month from the Chrysler Building's operation. In December 2023, Signa's creditors ordered the company to sell its stake in the Chrysler Building, following Signa's insolvency. RFR offered to buy Signa's ownership stake for a nominal fee of $1. Meanwhile, RFR sought to lease the building's retail space to luxury stores, signing their first luxury tenant in March 2024. By mid-2024, the building was aging significantly, and RFR had listed about of the Chrysler Building's office space as being "immediately available for rent". "The New York Times" reported that employees had complained about pest infestations, fountains with brown water, weak cellular reception, elevator delays, and poor natural lighting. Additionally, it would cost millions of dollars to upgrade the building to meet modern energy-efficiency codes. The Cooper Union moved to terminate RFR's ground lease of the Chrysler Building in September 2024, and RFR sued the college to prevent the termination of its leasehold. In its lawsuit, RFR claimed that the Cooper Union had driven away some tenants and had directed other tenants to make rent payments to the college rather than to RFR. Subsequently, the Cooper Union requested that RFR be evicted, and a state judge ordered tenants to pay rent to the Cooper Union that October. 
RFR's lease was ultimately terminated in January 2025, and the Cooper Union began seeking buyers for the building's ground lease that May. Chrysler Center. Chrysler Center is the building complex consisting of the Chrysler Building to the west, Chrysler Building East to the east, and the Chrysler Trylons commercial pavilion in the middle. After Tishman Speyer had acquired the entire complex, the firm renovated it completely from 1998 to 2000. The structure at 666 Third Avenue, known as the Kent Building at the time, was renovated and renamed Chrysler Building East. This International Style building, built in 1952, is high and has 32 floors. The mechanical systems were modernized and the interior was modified. Postmodern architect Philip Johnson designed a new facade of dark-blue glass, which was placed about in front of the Kent Building's existing facade. The structure did not resemble its western neighbor; Johnson explained that he did not "even like the architecture" of the Chrysler Building, despite acknowledging it as "the most loved building in New York". His design also included a extension, which surrounded the elevator core on the western end of the original Kent Building. The expansion used of unused air rights above the buildings in the middle of the block. The Kent Building was not a New York City designated landmark, unlike the Chrysler Building, so its renovation did not require the LPC's approval. After the addition, the total area of the Kent Building was . A new building, also designed by Philip Johnson, was built between the original skyscraper and the annex. This became the Chrysler Trylons, a commercial pavilion three stories high with a retail area of . Its design consists of three triangular glass "trylons" measuring , , and tall; each is slanted in a different direction. The trylons are supported by vertical steel mullions measuring wide; between the mullions are 535 panes of reflective gray glass. The retail structures themselves are placed on either side of the trylons. Due to the complexity of the structural work, structural engineer Severud Associates built a replica at Rimouski, Quebec. Johnson designed the Chrysler Trylons as "a monument for 42nd Street [...] to give you the top of the Chrysler Building at street level." After these modifications, the total leasable area of the complex was . The total cost of this project was about one hundred million dollars. This renovation has won several awards and commendations, including an Energy Star rating from the Environmental Protection Agency; a LEED Gold designation; and the Skyscraper Museum Outstanding Renovation Award of 2001. Tenants. In January 1930, the Chrysler Corporation opened satellite offices in the Chrysler Building during Automobile Show Week. In addition to the Chrysler Salon product showroom on the first and second floors, the building had a lounge and a theater for showing films of Chrysler products. Other original large tenants included Time, Inc. and Texaco oil. Needing more office space, Time moved to Rockefeller Center in 1937. By October 1946, television transmitter equipment for CBS was located in the Chrysler Building spire, fed by cables from CBS television studios located nearby in the Grand Central Terminal building, above the former waiting room. In 1977, Texaco relocated to a more suburban workplace in Purchase, New York. In addition, the offices of Shaw Walker and J. S. 
Bache & Company were immediately atop the Chrysler Salon, while A. B. Dick, Pan American World Airways, Adams Hats, Schrafft's, and Florsheim Shoes also had offices in the building. By the 21st century, many of the Chrysler Building's tenants leased space there because of the building's historical stature, rather than because of its amenities. Notable tenants in the 21st century include: Impact. Reception. The completed Chrysler Building garnered mixed reviews in the press. Van Alen was hailed as the "Doctor of Altitude" by "Architect" magazine, while architect Kenneth Murchison called Van Alen the "Ziegfeld of his profession", comparing him to popular Broadway producer Florenz Ziegfeld Jr. The building was praised for being "an expression of the intense activity and vibrant life of our day", and for "teem[ing] with the spirit of modernism, ... the epitome of modern business life, stand[ing] for progress in architecture and in modern building methods." An anonymous critic wrote in "Architectural Forum" October 1930 issue: "The Chrysler...stands by itself, something apart and alone. It is simply the realization, the fulfillment in metal and masonry, of a one-man dream, a dream of such ambitions and such magnitude as to defy the comprehension and the criticism of ordinary men or by ordinary standards." Walter Chrysler himself regarded the building as a "monument to me". The journalist George S. Chappell called the Chrysler's design "distinctly a stunt design, evolved to make the man in the street look up". Douglas Haskell stated that the building "embodies no compelling, organic idea", and alleged that Van Alen had abandoned "some of his best innovations in behalf of stunts and new 'effects'". Others compared the Chrysler Building to "an upended swordfish", or claimed it had a "Little Nemo"-like design. Lewis Mumford, a supporter of the International Style and one of the foremost architectural critics of the United States at the time, despised the building for its "inane romanticism, meaningless voluptuousness, [and] void symbolism". The public also had mixed reviews of the Chrysler Building, as Murchison wrote: "Some think it's a freak; some think it's a stunt." The architectural professor Gail Fenske said that, although the Chrysler Building was criticized as "too theatrical" at the time of its completion, the general public quickly took a liking to "the city's crowning skyscraper". Later reviews were more positive. Architect Robert A. M. Stern wrote that the Chrysler Building was "the most extreme example of the [1920s and 1930s] period's stylistic experimentation", as contrasted with 40 Wall Street and its "thin" detailing. George H. Douglas wrote in 2004 that the Chrysler Building "remains one of the most appealing and awe-inspiring of skyscrapers". Architect Le Corbusier called the building "hot jazz in stone and steel". Architectural critic Ada Louise Huxtable stated that the building had "a wonderful, decorative, evocative aesthetic", while Paul Goldberger noted the "compressed, intense energy" of the lobby, the "magnificent" elevators, and the "magical" view from the crown. Anthony W. Robins said the Chrysler Building was "one-of-a-kind, staggering, romantic, soaring, the embodiment of 1920s skyscraper pizzazz, the great symbol of Art Deco New York". Kim Velsey of "Curbed" said that the building "is unabashedly over the top" because of "its steel gargoyles, Moroccan marble lobby, and illuminated spire". 
The LPC said that the tower "embodies the romantic essence of the New York City skyscraper". Pauline Frommer, in the travel guide "Frommer's", gave the building an "exceptional" recommendation, saying: "In the Chrysler Building we see the roaring-twenties version of what Alan Greenspan called 'irrational exuberance'—a last burst of corporate headquarter building before stocks succumbed to the thudding crash of 1929." As icon. The Chrysler Building appears in several films set in New York and is widely considered one of the most positively acclaimed buildings in the city. A 1996 survey of New York architects revealed it as their favorite, and "The New York Times" described it in 2005 as "the single most important emblem of architectural imagery on the New York skyline". In mid-2005, the Skyscraper Museum in Lower Manhattan asked 100 architects, builders, critics, engineers, historians, and scholars, among others, to choose their 10 favorites among 25 of the city's towers. The Chrysler Building came in first place, with 90 respondents placing it on their ballots. In 2007, the building ranked ninth among 150 buildings in the AIA's "List of America's Favorite Architecture". The building was included in the Lego Company's architecture set representing the New York City skyline. The Chrysler Building is widely heralded as an Art Deco icon. "Fodor's New York City 2010" described the building as being "one of the great art deco masterpieces" which "wins many a New Yorker's vote for the city's most iconic and beloved skyscraper". "Frommer's" states that the Chrysler was "one of the most impressive Art Deco buildings ever constructed". "Insight Guides" 2016 edition maintains that the Chrysler Building is considered among the city's "most beautiful" buildings. Its distinctive profile has inspired similar skyscrapers worldwide, including One Liberty Place in Philadelphia, Two Prudential Plaza in Chicago, and the Al Kazim Towers in Dubai. In addition, the New York-New York Hotel and Casino in Paradise, Nevada, contains the "Chrysler Tower", a replica of the Chrysler Building measuring 35 or 40 stories tall. A portion of the hotel's interior was also designed to resemble the Chrysler Building's interior. In media. While seen in many films, the Chrysler Building almost never appears as a main setting in them, prompting architect and author James Sanders to quip it should win "the Award for Best Supporting Skyscraper". The building was supposed to be featured in the 1933 film "King Kong", but only makes a cameo at the end thanks to its producers opting for the Empire State Building in a central role. The Chrysler Building appears in the background of "The Wiz" (1978); as the setting of much of "Q - The Winged Serpent" (1982); in the initial credits of "The Shadow of the Witness" (1987); and during or after apocalyptic events in "Independence Day" (1996), "Armageddon" (1998), "Deep Impact" (1998), "Godzilla" (1998), and "A.I. Artificial Intelligence" (2001). The building also appears in other films, such as "Spider-Man" (2002), "" (2007), "Two Weeks Notice" (2002), "The Sorcerer's Apprentice" (2010), "The Avengers" (2012) and "Men in Black 3" (2012). The building is mentioned in the number "It's the Hard Knock Life" for the musical "Annie", and it is the setting for the post-game content in the Squaresoft video game "Parasite Eve". In addition, the introductory scenes of the TV show "Sex and the City" depict the Chrysler Building. 
In December 1929, Walter Chrysler hired Margaret Bourke-White to take publicity images from a scaffold high. She was deeply inspired by the new structure and especially smitten by the massive eagle's-head figures projecting off the building. According to one account, Bourke-White wanted to live in the building for the duration of the photo shoot, but the only person able to do so was the janitor, so she was instead relegated to co-leasing a studio with Time Inc. In 1930, several of her photographs were used in a special report on skyscrapers in the then-new "Fortune" magazine. Bourke-White worked in a 61st-floor studio designed by John Vassos until she was evicted in 1934. That year, Bourke-White's partner Oscar Graubner took a famous photo called "Margaret Bourke-White atop the Chrysler Building", which depicts her taking a photo of the city's skyline while sitting on one of the 61st-floor eagle ornaments. On October 5, 1998, Christie's auctioned the photograph for $96,000. The Chrysler Building has been the subject of other photographs as well. During a January 1931 dance organized by the Society of Beaux-Arts, six architects, including Van Alen, were photographed while wearing costumes resembling the buildings that each architect designed. In 1991, the photographer Annie Leibovitz took pictures of the dancer David Parsons reclining on a ledge near the top of the building.
6790
1295272305
https://en.wikipedia.org/wiki?curid=6790
Cape Breton (disambiguation)
Cape Breton Island is an island in the Canadian province of Nova Scotia. Cape Breton may also refer to:
6794
47033785
https://en.wikipedia.org/wiki?curid=6794
Comet Shoemaker–Levy 9
Comet Shoemaker–Levy 9 (formally designated D/1993 F2) was a comet that broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System. The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker, and David Levy in 1993. Shoemaker–Levy 9 (SL9) had been captured by Jupiter and was orbiting the planet at the time. It was located on the night of March 24 in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. It was the first active comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20 to 30 years earlier. Calculations showed that its unusual fragmented form was due to a previous closer approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull the comet apart. The comet was later observed as a series of fragments ranging up to in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994 at a speed of approximately (Jupiter's escape velocity) or . The prominent scars from the impacts were more visible than the Great Red Spot and persisted for many months. Discovery. While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993, in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program. Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, thence its name. It was their eleventh comet discovery overall including their discovery of two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 26, 1993. The discovery image gave the first hint that comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could be a line-of-sight effect, its apparent motion in the sky suggested that the comet was physically close to the planet. Comet with a Jovian orbit. Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike any other comet then known. Its orbit around Jupiter was very loosely bound, with a period of about 2 years and an apoapsis (the point in the orbit farthest from the planet) of . Its orbit around the planet was highly eccentric ("e" = 0.9986). Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. 
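As a rough aside, the impact speed quoted above is essentially Jupiter's escape velocity, which can be checked from standard reference values for the planet's gravitational parameter, GM_J ≈ 1.267×10^17 m^3 s^-2, and equatorial radius, R_J ≈ 71,500 km (figures not taken from this article):

v_{\mathrm{esc}} = \sqrt{\frac{2GM_J}{R_J}} \approx \sqrt{\frac{2 \times 1.267\times10^{17}\ \mathrm{m^3\,s^{-2}}}{7.15\times10^{7}\ \mathrm{m}}} \approx 5.95\times10^{4}\ \mathrm{m/s} \approx 60\ \mathrm{km/s}.

The quoted eccentricity likewise implies an extremely elongated ellipse: for e = 0.9986 the ratio of perijove to apojove distance is (1 − e)/(1 + e) ≈ 0.0014/1.9986 ≈ 7×10^{-4}, so the comet's closest approach to Jupiter was less than a thousandth of its farthest distance from the planet.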
Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, Satoru Otomo on March 17, and a team led by Eleanor Helin from images on March 19. An image of the comet on a Schmidt photographic plate taken on March 19 was identified on March 21 by M. Lindgren, in a project searching for comets near Jupiter. However, as his team were expecting comets to be inactive or at best exhibit a weak dust coma, and SL9 had a peculiar morphology, its true nature was not recognised until the official announcement 5 days later. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt. The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere. When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity—that is to say, the ellipse was nearly flattened out. The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over above its cloud tops—a smaller distance than Jupiter's radius of , and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed fragmented comets. More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days. Predictions for the collision. The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds. Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around ) to across, suggesting that the original comet may have had a nucleus up to across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. 
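As a rough quantitative gloss on the Hill sphere mentioned above (using standard reference values for Jupiter's semi-major axis and the Jupiter-to-Sun mass ratio, not figures from this article), the radius within which Jupiter's gravity dominates the Sun's is approximately

r_H \approx a_J \left(\frac{M_J}{3M_\odot}\right)^{1/3} \approx 5.2\ \mathrm{AU} \times \left(\frac{9.5\times10^{-4}}{3}\right)^{1/3} \approx 0.35\ \mathrm{AU} \approx 5\times10^{7}\ \mathrm{km},

which is why a comet drifting past Jupiter near its own aphelion, moving slowly relative to the planet, can plausibly be captured well inside this region.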
One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal. Impacts. Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, the W. M. Keck Observatory, and the "Galileo" spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995. Although the impacts took place on the side of Jupiter hidden from Earth, "Galileo", then at a distance of from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions. Two other space probes made observations at the time of the impact: the "Ulysses" spacecraft, primarily designed for solar observations, was pointed toward Jupiter from its location away, and the distant "Voyager 2" probe, some from Jupiter and on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range and make observations with its ultraviolet spectrometer. Astronomer Ian Morison described the impacts as following: The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the [comet's] nucleus slammed into Jupiter's southern hemisphere at about . Instruments on "Galileo" detected a fireball that reached a peak temperature of about , compared to the typical Jovian cloud-top temperature of about . It then expanded and cooled rapidly to about . The plume from the fireball quickly reached a height of over and was observed by the HST. A few minutes after the impact fireball was detected, "Galileo" measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact. Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact; the spot was visible from Earth. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact. Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter. This impact created a giant dark spot over (almost one ) across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal). 
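Taken at face value, the energy figure quoted above for fragment G can be converted into a rough mass estimate, assuming an impact speed of about 60 km/s (the approximate Jovian escape velocity); the TNT conversion factor and the bulk density used below are standard reference assumptions, not figures from this article:

E \approx 6\times10^{6}\ \mathrm{Mt} \times 4.184\times10^{15}\ \mathrm{J/Mt} \approx 2.5\times10^{22}\ \mathrm{J}, \qquad m \approx \frac{2E}{v^{2}} \approx \frac{2(2.5\times10^{22}\ \mathrm{J})}{(6\times10^{4}\ \mathrm{m/s})^{2}} \approx 1.4\times10^{13}\ \mathrm{kg}.

At an assumed comet-like bulk density of roughly 0.5 g/cm^3, that mass corresponds to about 30 km^3 of material, illustrating why fragment G was regarded as one of the largest pieces of the disrupted nucleus.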
Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet. Observations and discoveries. Chemical studies. Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S2) and carbon disulfide (CS2), the first detection of either in Jupiter, and only the second detection of S2 in any astronomical object. Other molecules detected included ammonia (NH3) and hydrogen sulfide (H2S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers. As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon were detected, with abundances consistent with what would be found in a cometary nucleus. Although a substantial amount of water was detected spectroscopically, it was not as much as predicted, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough. Waves. As predicted, the collisions generated enormous waves that swept across Jupiter at speeds of and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere. Other observations. Radio observations revealed a sharp increase in continuum emission at a wavelength of after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts. About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole. Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits. 
"Voyager 2" failed to detect anything with calculations, showing that the fireballs were just below the craft's limit of detection; no abnormal levels of UV radiation or radio signals were registered after the blast. "Ulysses" also failed to detect any abnormal radio frequencies. Post-impact analysis. Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about ; the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about in diameter. These predictions were among the few that were actually confirmed by subsequent observation. One of the surprises of the impacts was the small amount of water revealed compared to predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter. Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in a meteor air burst probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached , well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer. Longer-term effects. The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before, or since. The impact produced many new species in the stratosphere of Jupiter. Long-lasting species are H2O, CO, CS and HCN. H2O emission was monitored between 2002 and 2019 with the Odin Space Telescope and showed a linear decline. Spectroscopic observers found that ammonia and carbon disulfide (CS2) persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere. CS was detected 19 years after the impact with the Atacama Submillimeter Telescope Experiment in the atmosphere of Jupiter. The CS total mass showed a 90% decrease. The new species can help to reveal the processes in Jupiter’s aurora. ALMA detected CO and HCN. In and near the auroral region HCN was depleted. Chemical processes bonds HCN on large aurora-produced aerosols. JWST observations from December 2022 detected an increase of H2O in the south polar region, while CO2 is depleted. This is seen as an exchange of oxygen between the two molecules in the southern auroral region. HCN is also depleted towards the south polar region. Atmospheric temperatures dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a region wide, but dropped back to normal levels within a week of the impact. 
At smaller sites, temperatures 10 K (10 °C; 18 °F) higher than the surroundings persisted for almost two weeks. Global stratospheric temperatures rose immediately after the impacts, then fell to below pre-impact temperatures 2–3 weeks afterwards, before rising slowly to normal temperatures. Comet Shoemaker-Levy 9 also caused ripples in the faint ring system of Jupiter, which were first observed by "Galileo". 13 years later, the "New Horizons" spacecraft en route to Pluto also observed ripples, suggesting that subsequent events may have also tilted the rings. Additionally it is predicted that the comet could have formed a new ring around Jupiter. Frequency of impacts. SL9 is not unique in having orbited Jupiter for a time; five comets, including 82P/Gehrels, 147P/Kushida–Muramatsu, and 111P/Helin–Roman–Crockett, are known to have been temporarily captured by the planet. Cometary orbits around Jupiter are unstable, as they will be highly elliptical and likely to be strongly perturbed by the Sun's gravity at apojove (the farthest point on the orbit from the planet). By far the most massive planet in the Solar System, Jupiter can capture objects relatively frequently, but the size of SL9 makes it a rarity: one post-impact study estimated that comets in diameter impact the planet once in approximately 500 years and those in diameter do so just once in every 6,000 years. There is very strong evidence that comets have previously been fragmented and collided with Jupiter and its satellites. During the Voyager missions to the planet, planetary scientists identified 13 crater chains on Callisto and three on Ganymede, the origin of which was initially a mystery. Crater chains seen on the Moon often radiate from large craters, and are thought to be caused by secondary impacts of the original ejecta, but the chains on the Jovian moons did not lead back to a larger crater. The impact of SL9 strongly implied that the chains were due to trains of disrupted cometary fragments crashing into the satellites. Impact of July 19, 2009. On July 19, 2009, exactly 15 years after the SL9 impacts, a new black spot about the size of the Pacific Ocean appeared in Jupiter's southern hemisphere. Thermal infrared measurements showed the impact site was warm and spectroscopic analysis detected the production of excess hot ammonia and silica-rich dust in the upper regions of Jupiter's atmosphere. Scientists have concluded that another impact event had occurred, but this time a more compact and stronger object, probably a small undiscovered asteroid, was the cause. Jupiter's role in protection of the inner Solar System. The events of SL9's interaction with Jupiter greatly highlighted Jupiter's role in protecting the inner planets from both interstellar and in-system debris by acting as a "cosmic vacuum cleaner" for the Solar System (Jupiter barrier). The planet's strong gravitational influence attracts many small comets and asteroids and the rate of cometary impacts on Jupiter is thought to be between 2,000 and 8,000 times higher than the rate on Earth. The extinction of the non-avian dinosaurs at the end of the Cretaceous period is generally thought to have been caused by the Cretaceous–Paleogene impact event, which created the Chicxulub crater, demonstrating that cometary impacts are indeed a serious threat to life on Earth. Astronomers have speculated that without Jupiter's immense gravity, extinction events might have been more frequent on Earth and complex life might not have been able to develop. 
This is part of the argument used in the Rare Earth hypothesis. In 2009, it was shown that the presence of a smaller planet at Jupiter's position in the Solar System might increase the impact rate of comets on the Earth significantly. A planet of Jupiter's mass still seems to provide increased protection against asteroids, but the total effect on all orbital bodies within the Solar System is unclear. This and other recent models call into question the nature of Jupiter's influence on Earth impacts.
6796
332841
https://en.wikipedia.org/wiki?curid=6796
Ceres Brewery
The Ceres Brewery was a beer and soft drink producing facility in Århus, Denmark, that operated from 1856 until 2008. Although the brewery was closed by its owner, Royal Unibrew, the Ceres brand continues, with the product brewed at other facilities. The area where the brewery stood is being redeveloped for residential and commercial use and has been named CeresByen (Ceres City). History. "Ceres Brewery" was founded in 1856 by Malthe Conrad Lottrup, a grocer, with chemists "A. S. Aagard" and "Knud Redelien", as the city's seventh brewery. It was named after the Roman goddess Ceres, and its opening was announced in the local newspaper, "Århus Stiftstidende". Lottrup expanded the brewery after ten years, adding a grand new building as his private residence. He was succeeded by his son-in-law, Laurits Christian Meulengracht, who ran the brewery for almost thirty years, expanding it further before selling it to "Østjyske Bryggerier", another brewing firm. The Ceres brewery was named an official purveyor to the "Royal Danish Court" in 1914.
6799
35006187
https://en.wikipedia.org/wiki?curid=6799
COBOL
COBOL (; an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural, and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages, or replaced with other software. COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC, designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly pressured computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023. COBOL statements have prose syntax such as , which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words compared to the succinct and mathematically inspired syntax of other languages. The COBOL code is split into four "divisions" (identification, environment, data, and procedure), containing a rigid hierarchy of sections, paragraphs, and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions, and just one class. COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses often result in monolithic programs that are hard to comprehend as a whole, despite their local readability. For years, COBOL has been assumed as a programming language for business operations in mainframes, although in recent years, many COBOL operations have been moved to cloud computing. History and specification. Background. In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost US$600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster. On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet, and Saul Gorn. At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. 
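To make the division structure described in the overview above concrete, the following is a minimal sketch of a COBOL program. The program and data names are hypothetical, invented purely for illustration; the sketch follows fixed-format conventions and should be accepted by a modern open-source compiler such as GnuCOBOL.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SALES-REPORT.
      * Identification names the program; Environment would describe
      * files and devices (empty here); Data declares all storage.
       ENVIRONMENT DIVISION.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  CUSTOMER-NAME  PIC X(20)   VALUE "ACME SUPPLY CO".
       01  UNITS-SOLD     PIC 9(4)    VALUE 250.
       01  UNIT-PRICE     PIC 9(3)V99 VALUE 19.95.
       01  SALE-TOTAL     PIC 9(7)V99 VALUE ZERO.
       PROCEDURE DIVISION.
      * The Procedure division holds the executable sentences,
      * grouped into paragraphs.
       MAIN-PARAGRAPH.
           MULTIPLY UNITS-SOLD BY UNIT-PRICE GIVING SALE-TOTAL
           DISPLAY "CUSTOMER: " CUSTOMER-NAME
           DISPLAY "TOTAL: " SALE-TOTAL
           STOP RUN.

Even this tiny example shows the characteristics described above: the four divisions appear in their fixed order, the data layout (the PIC clauses) is declared separately from the procedural logic, and the statements read as English-like sentences.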
The DoD operated 225 computers, had 175 more on order, and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs, and ease modernization. Charles Phillips agreed to sponsor the meeting, and tasked the delegation with drafting the agenda. COBOL 60. On 28 and 29 May 1959, a meeting was held at the Pentagon to discuss the creation of a common programming language for business (exactly one year after the Zürich ALGOL 58 meeting). It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs. Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program, and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent, and be easy to use, even at the expense of power. The meeting resulted in the creation of a steering committee and short, intermediate, and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages; it did not explicitly direct them to create a new language. The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap. The steering committee met on 4 June and agreed to name the entire activity the "Committee on Data Systems Languages", or CODASYL, and to form an executive committee. The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data descriptions, statements, existing applications, and user experiences. The committee mainly examined the FLOW-MATIC, AIMACO, and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands, and the separation of data descriptions and instructions. Hopper is sometimes called "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, said Hopper "was not the mother, creator, or developer of Cobol." 
IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English. In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out.". Features from COMTRAN incorporated into COBOL included formulas, the clause, an improved codice_1 statement, which obviated the need for GO TOs, and a more robust file management system. The usefulness of the committee's work was a subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple. Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas, and table "" (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time), and functions (thought of as purely mathematical and of no use in data processing). The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions," and Bob Bemer later described them as a "hodgepodge." The committee was given until December to improve it. At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language), and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion. In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it. This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste. It soon became apparent that the committee was too large to make any further progress quickly. 
A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure. A subcommittee was formed to analyze existing languages and was made up of six individuals: The subcommittee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification. The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as "COBOL 60". The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers. The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications. During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register, and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL. Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved. The relative influence of the languages that were used is still indicated in the recommended advisory printed in all COBOL reference manuals: COBOL-61 to COBOL-65. Many logical flaws were found in "COBOL 60", leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee performed a total cleanup, and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained. Early COBOL compilers were primitive and slow. COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs, as well as the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91. In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease. The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. 
COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables. COBOL-68. Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced "USA Standard COBOL X3.23" in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972. COBOL-74. By 1970, COBOL had become the most widely used programming language in the world. Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970, and 1973, including changes such as new inter-program communication, debugging, and file merging facilities, as well as improved string handling and library inclusion features. Although CODASYL was independent of the ANSI committee, the "CODASYL Journal of Development" was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee. The Programming Language Committee was not well-known, however. The vice president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available. In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the statement and the segmentation module. Deleted features included the statement, the statement (which was replaced by ), and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL but was reinstated before the standard was published. ISO later adopted the updated standard in 1978. COBOL-85. In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user". During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard. ISO TC97-SC5 installed in 1979 the international COBOL Experts Group, on initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. 
Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need of new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, where ANSI made most of the proposals. In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new codice_2 statement and inline codice_3 were particularly well received and improved productivity, thanks to simplified control flow and debugging. The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed. In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985. Sixty features were changed or deprecated and 115 were added, such as: The new standard was adopted by all national standard bodies, including ANSI. Two amendments followed in 1989 and 1993. The first amendment introduced intrinsic functions and the other provided corrections. COBOL 2002 and object-oriented COBOL. In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs. In the early 1990s, work began on adding object-oriented programming in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk. The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The final approved ISO standard was approved and published in late 2002. Fujitsu/GTSoftware, Micro Focus introduced object-oriented COBOL compilers targeting the .NET Framework. There were many other new features, many of which had been in the "CODASYL COBOL Journal of Development" since 1978 and had missed the opportunity to be included in COBOL-85. These other features included: Three corrigenda were published for the standard: two in 2006 and one in 2009. COBOL 2014. Between 2003 and 2009, three Technical Reports (TRs) were produced describing object finalization, XML processing and collection classes for COBOL. COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced. COBOL 2014 includes the following changes: COBOL 2023. The COBOL 2023 standard added a few new features: There is as yet no known complete implementation of this standard. Legacy. COBOL programs are used globally in governments and various industries including retail, travel, finance, and healthcare. Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. 
COBOL currently runs on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually. As of 2020, COBOL ran background processes 95% of the time a credit or debit card was swiped. Y2K. Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted". Modernization efforts. In 2006 and 2012, "Computerworld" surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware. By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite COBOL systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, so proposals to train more people in COBOL have been advocated. Several banks have undertaken multi-year COBOL modernization efforts, sometimes causing widespread service disruptions that have led to fines. During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act. Features. Syntax. COBOL has an English-like syntax, which is used to describe nearly everything in COBOL programs. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be abbreviated by removing repeated conditions and variables: for example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can IS and ARE, and VALUE and VALUES.
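A small sketch of these equivalent relation-condition forms (the data name item-count is invented for illustration):

*> Fully spelled out
IF item-count IS GREATER THAN 0 AND item-count IS LESS THAN 100
    DISPLAY "In range"
END-IF
*> Written with symbolic operators
IF item-count > 0 AND item-count < 100
    DISPLAY "In range"
END-IF
*> Abbreviated by omitting the repeated subject
IF item-count > 0 AND < 100
    DISPLAY "In range"
END-IF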
Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see the PICTURE clause section below) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. "Hello!"). Separators include the space character and commas and semi-colons followed by a space. A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.
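A minimal sketch of this division structure (the program itself is trivial and invented for illustration):

IDENTIFICATION DIVISION.
PROGRAM-ID. division-sketch.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 greeting PIC X(13) VALUE "Hello, COBOL!".
PROCEDURE DIVISION.
    DISPLAY greeting
    STOP RUN.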
Metalanguage. COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. As an example, consider the following (simplified) description of an ADD statement, in which braces enclose a choice, brackets enclose an optional phrase and "..." denotes repetition:

ADD {identifier-1 | literal-1} ... TO {identifier-2 [ROUNDED]} ...
    [[NOT] [ON] SIZE ERROR imperative-statement] ...
    [END-ADD]

This description permits the following variants:

ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
    ON SIZE ERROR DISPLAY "Error"
END-ADD
ADD a TO b
    NOT SIZE ERROR DISPLAY "No error"
    ON SIZE ERROR DISPLAY "Error"

Code format. The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well. COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were the sequence number area (columns 1–6), the indicator area (column 7), Area A (columns 8–11) and Area B (columns 12–72). In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column. COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator. Identification division. The identification division identifies the following code entity and contains the definition of a class or interface. Object-oriented programming. Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.

INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation

COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code no way to access it. Method overloading was added in COBOL 2014. Environment division. The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information. Files. COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Other implementations are Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access. A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length. Data division. The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section, for report writer reports; and the screen section, for text-based user interfaces. Aggregated data. Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.

01 some-record. *> Aggregate group record item
   05 num PIC 9(10). *> Elementary item
   05 the-date. *> Aggregate (sub)group record item
      10 the-year PIC 9(4). *> Elementary item
      10 the-month PIC 99. *> Elementary item
      10 the-day PIC 99. *> Elementary item

In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date. Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following example:

01 sale-date.
   05 the-year PIC 9(4).
   05 the-month PIC 99.
   05 the-day PIC 99.

The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). This syntax is similar to the "dot notation" supported by most contemporary languages.
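A brief sketch of such qualified references, reusing the two record layouts defined above:

*> Qualification with IN or OF picks out one of the identically named items
MOVE the-year IN sale-date TO the-year IN the-date IN some-record
MOVE the-month OF sale-date TO the-month OF the-date OF some-record
*> MOVE CORRESPONDING copies every identically named subordinate item at once
MOVE CORRESPONDING sale-date TO the-date IN some-record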
Other data levels. A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended and many installations forbade its use.

01 customer-record.
   05 cust-key PIC X(10).
   05 cust-name.
      10 cust-first-name PIC X(30).
      10 cust-last-name PIC X(30).
   05 cust-dob PIC 9(8).
   05 cust-balance PIC 9(7)V99.
66 cust-personal-details RENAMES cust-name THRU cust-dob.
66 cust-all-details RENAMES cust-name THRU cust-balance.

A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:

77 property-name PIC X(80).
77 sales-region PIC 9(5).

An 88 level-number declares a condition-name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of "H", the condition-name wage-is-hourly is true, whereas when it contains a value of "S" or "Y", the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.

01 wage-type PIC X.
   88 wage-is-hourly VALUE "H".
   88 wage-is-yearly VALUE "S", "Y".

Data types. Standard COBOL provides data types for alphabetic, alphanumeric, national and boolean data, numeric data, indexes, pointers and object references. Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type. PICTURE clause. A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items. USAGE clause. The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats include binary; packed decimal (PACKED-DECIMAL), which stores two decimal digits per byte; the default DISPLAY format, which stores one character per digit; implementation-defined COMPUTATIONAL formats; and floating-point.
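As a small illustration of PICTURE and USAGE clauses together (the data names and values here are invented for this sketch):

01 gross-pay     PIC S9(5)V99 USAGE PACKED-DECIMAL. *> signed, two implied decimal places, packed storage
01 employee-name PIC X(30).                         *> alphanumeric, 30 characters
01 edited-pay    PIC +(5)9.99.                      *> edited numeric with a floating sign
*> ... later, in the procedure division:
MOVE 1234.5 TO gross-pay
MOVE gross-pay TO edited-pay
DISPLAY edited-pay *> typically shows " +1234.50"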
Report writer. The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings. Reports are associated with report files, which may only be written to through report writer statements.

FD report-out REPORT sales-report.

Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:

RD sales-report
   PAGE LIMITS 60 LINES
   FIRST DETAIL 3
   CONTROLS seller-name.

01 TYPE PAGE HEADING.
   03 COL 1 VALUE "Sales Report".
   03 COL 74 VALUE "Page".
   03 COL 79 PIC Z9 SOURCE PAGE-COUNTER.

01 sales-on-day TYPE DETAIL, LINE + 1.
   03 COL 3 VALUE "Sales on".
   03 COL 12 PIC 99/99/9999 SOURCE sales-date.
   03 COL 21 VALUE "were".
   03 COL 26 PIC $$$$9.99 SOURCE sales-amount.

01 invalid-sales TYPE DETAIL, LINE + 1.
   03 COL 3 VALUE "INVALID RECORD:".
   03 COL 19 PIC X(34) SOURCE sales-record.

01 TYPE CONTROL HEADING seller-name, LINE + 2.
   03 COL 1 VALUE "Seller:".
   03 COL 9 PIC X(30) SOURCE seller-name.

The above report description defines a page heading, a detail line for each day's sales, a detail line for invalid records, and a control heading printed whenever seller-name changes. Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this:

OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
    READ sales
        AT END
            EXIT PERFORM
    END-READ
    VALIDATE sales-record
    IF valid-record
        GENERATE sales-on-day
    ELSE
        GENERATE invalid-sales
    END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out

Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime. Procedure division. Procedures. The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections. Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used. A PERFORM statement somewhat resembles a procedure call in newer languages in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine-name, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM ... THRU ... construct:

PROCEDURE so-and-so.
    PERFORM ALPHA
    PERFORM ALPHA THRU GAMMA
    STOP RUN.
ALPHA.
    DISPLAY 'A'.
BETA.
    DISPLAY 'B'.
GAMMA.
    DISPLAY 'C'.

The output of this program will be: "A A B C".
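Because PERFORM passes no arguments, data is conventionally shared through items in the data division that both the caller and the performed paragraph refer to. A small, hypothetical sketch (radius and area are assumed to be numeric working-storage items):

    MOVE 5 TO radius
    PERFORM compute-area
    DISPLAY "Area is " area
    STOP RUN.
compute-area.
    COMPUTE area ROUNDED = 3.14159 * radius * radius.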
PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORMed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined. The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements are executed, control flows from top to bottom through the program. But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways. The following example illustrates the problem:

LABEL1.
    DISPLAY '1'
    PERFORM LABEL2 THRU LABEL3
    STOP RUN.
LABEL2.
    DISPLAY '2'
    PERFORM LABEL3 THRU LABEL4.
LABEL3.
    DISPLAY '3'.
LABEL4.
    DISPLAY '4'.

One might expect that the output of this program would be "1 2 3 4 3": after displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable. A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example illustrates this:

    MOVE 1 TO A
    PERFORM LABEL
    STOP RUN.
LABEL.
    DISPLAY A
    IF A < 3
        ADD 1 TO A
        PERFORM LABEL
    END-IF
    DISPLAY 'END'.

One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to the DISPLAY 'END' statement.
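By contrast, the inline form of PERFORM introduced in COBOL-85 keeps the loop body at the point of use and needs no separate procedure at all; a minimal sketch (counter is an invented numeric item):

PERFORM VARYING counter FROM 1 BY 1 UNTIL counter > 5
    DISPLAY "Pass " counter
END-PERFORM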
Statements. COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section. Control flow. COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:

EVALUATE TRUE ALSO desired-speed ALSO current-speed
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
        PERFORM speed-up-machine
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
        PERFORM slow-down-machine
    WHEN lid-open ALSO ANY ALSO NOT ZERO
        PERFORM emergency-stop
    WHEN OTHER
        CONTINUE
END-EVALUATE

The PERFORM statement is used to define loops which are executed until a condition is true (not while it is true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory.
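A hypothetical sketch of such a call, showing two of the passing modes (the subprogram name "TAXCALC" and the data items are invented for illustration):

CALL "TAXCALC" USING BY REFERENCE invoice-record
                     BY CONTENT   tax-rate
    ON EXCEPTION
        DISPLAY "TAXCALC could not be loaded"
END-CALL

The called subprogram would declare matching items in its linkage section and list them on its PROCEDURE DIVISION USING header; BY VALUE could also be used where a prototype is available.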
GO TO causes the program to jump to a specified procedure. The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure. Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected. I/O. File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed. User interaction is done using ACCEPT and DISPLAY. Data manipulation. Data is manipulated with verbs such as MOVE, which copies data between items; INITIALIZE, which resets items to default values; INSPECT, which tallies or replaces characters in a string; and STRING and UNSTRING, which concatenate and split strings, alongside the arithmetic verbs ADD, SUBTRACT, MULTIPLY, DIVIDE and COMPUTE. Files and tables are sorted using SORT, and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order. Scope termination. Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.

IF invalid-record
    IF no-more-records
        NEXT SENTENCE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE.

IF invalid-record
    IF no-more-records
        CONTINUE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE
        END-READ
    END-IF
END-IF

Nested statements terminated with a period are a common source of bugs. For example, examine the following code:

IF x DISPLAY y.
DISPLAY z.

Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by the erroneous period after DISPLAY y. Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE.

IF x IF y DISPLAY a ELSE DISPLAY b.

In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF. Self-modifying code. The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002. The ALTER statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer." Hello, world. A "Hello, World!" program in COBOL:

IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!"

When the now famous "Hello, World!" program example in "The C Programming Language" was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader, and 80 column punch cards. The listing below, "with an empty DATA DIVISION", was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.

//COBUCLG JOB (001),'COBOL BASE TEST', 00010000
// CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1) 00020000
//BASETEST EXEC COBUCLG 00030000
//COB.SYSIN DD * 00040000
00000* VALIDATION OF BASE COBOL INSTALL 00050000
01000 IDENTIFICATION DIVISION. 00060000
01100 PROGRAM-ID. 'HELLO'. 00070000
02000 ENVIRONMENT DIVISION. 00080000
02100 CONFIGURATION SECTION. 00090000
02110 SOURCE-COMPUTER. GNULINUX. 00100000
02120 OBJECT-COMPUTER. HERCULES. 00110000
02200 SPECIAL-NAMES. 00120000
02210 CONSOLE IS CONSL. 00130000
03000 DATA DIVISION. 00140000
04000 PROCEDURE DIVISION. 00150000
04100 00-MAIN. 00160000
04110 DISPLAY 'HELLO, WORLD' UPON CONSL. 00170000
04900 STOP RUN. 00180000
//LKED.SYSLIB DD DSNAME=SYS1.COBLIB,DISP=SHR 00190000
// DD DSNAME=SYS1.LINKLIB,DISP=SHR 00200000
//GO.SYSPRINT DD SYSOUT=A 00210000
// 00220000

After submitting the JCL, the MVS console displayed:

19.52.48 JOB 3 $HASP100 COBUCLG ON READER1 COBOL BASE TEST
19.52.48 JOB 3 IEF677I WARNING MESSAGE(S) FOR JOB COBUCLG ISSUED
19.52.48 JOB 3 $HASP373 COBUCLG STARTED - INIT 1 - CLASS A - SYS BSP1
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSLIB DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEFACTRT - Stepname Procstep Program Retcode
19.52.48 JOB 3 COBUCLG BASETEST COB IKFCBL00 RC= 0000
19.52.48 JOB 3 COBUCLG BASETEST LKED IEWL RC= 0000
19.52.48 JOB 3 +HELLO, WORLD
19.52.48 JOB 3 COBUCLG BASETEST GO PGM=*.DD RC= 0000
19.52.48 JOB 3 $HASP395 COBUCLG ENDED

The line reading "+HELLO, WORLD" is the program's output; the surrounding lines are job scheduler and step-completion messages. The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL. Reception. Lack of structure. In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 and entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind". In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training. One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures, so loop bodies were not located where they were used, making programs harder to understand. COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake. Another complication stemmed from the ability to PERFORM a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule. This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included. Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable.
It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways. Compatibility issues. COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants. COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs. Verbose syntax. COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers. The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code and the main changes in COBOL-85 were there to help ease maintenance. Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL" which she attributed to COBOL's verbose syntax. Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them". By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems. In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it. Concerns about the design process. Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence. 
COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard. Influences on other languages. COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems, and aggregated data was a significant advance over Fortran's arrays. PICTURE data declarations were incorporated into PL/I, with minor changes. COBOL's COPY facility, although considered "primitive", influenced the development of include directives. The focus on portability and standardization meant that COBOL programs could be moved between hardware platforms and operating systems with relatively little effort, which facilitated the spread of the language to a wide variety of environments. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.
6801
1290323001
https://en.wikipedia.org/wiki?curid=6801
Crew
A crew is a body or a group of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: operating a ship, particularly a sailing ship, involves numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the titles "crewmate", "crewman" or "crew-member". "Crew" also refers to the sport of rowing, where teams row competitively in racing shells.
6804
1297657345
https://en.wikipedia.org/wiki?curid=6804
Charge-coupled device
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging. Overview. In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors. History. The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices. In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analog of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices". The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s. The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent () on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971. The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. 
By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 × 100 pixel interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild CCD in 1975. The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981. The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981. Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag could be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers". Basics of operation. In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). 
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing. Detailed physics of operation. Charge generation. Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly "p"-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an "n" channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified: photogeneration (the desired signal), generation in the depletion region, generation at the silicon–oxide interface, and generation in the neutral bulk. The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel. CCDs are normally susceptible to ionizing radiation and energetic particles, which cause noise in the output of the CCD, and this must be taken into consideration in satellites using CCDs. Design and manufacturing. The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly "p" doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device: This thin layer (≈ 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. 
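The readout sequence described under "Basics of operation" above can be pictured with a short simulation. The following Python sketch is purely illustrative: it shifts a small array of charge packets row by row into a serial register and then into an output amplifier, and the array size, illumination level and conversion gain (volts per electron) are arbitrary assumed values rather than figures from this article.

```python
import numpy as np

# A rough simulation of CCD readout: charge packets are shifted row by row into
# a serial register and then, one pixel at a time, into an output amplifier
# that converts charge into voltage.  The 8x8 array size, the illumination
# level and the 2 uV/electron conversion gain are arbitrary illustrative
# assumptions, not figures from the text.

rng = np.random.default_rng(0)
collected = rng.poisson(lam=500, size=(8, 8)).astype(float)  # electrons per pixel

CONVERSION_GAIN = 2e-6   # volts of output per electron of charge (assumed)

def read_out(array):
    """Destructively read the array out through a serial register."""
    rows, cols = array.shape
    voltages = np.zeros((rows, cols))
    for r in range(rows):
        # Parallel transfer: the bottom row drops into the serial register,
        # every other row moves down one place.
        serial_register = array[-1, :].copy()
        array[1:, :] = array[:-1, :].copy()
        array[0, :] = 0.0
        for c in range(cols):
            # Serial transfer: the last capacitor dumps its charge into the
            # charge amplifier, and the remaining packets shift along by one.
            charge = serial_register[-1]
            serial_register[1:] = serial_register[:-1].copy()
            serial_register[0] = 0.0
            voltages[r, c] = charge * CONVERSION_GAIN
    return voltages

print(read_out(collected.copy()).round(6))
```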
This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible). The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device. CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets. Architecture. The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. These architectures differ primarily in their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. 
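The claim that a fast transfer into the masked storage region keeps smear to a few percent can be checked with back-of-the-envelope arithmetic: light keeps falling on a column while its charge is being moved, so the smeared fraction is roughly the transfer time divided by the total time light is collected. All timing figures in the sketch below are illustrative assumptions, not values from the text.

```python
# Back-of-the-envelope sketch of frame-transfer smear.  Light keeps falling on
# a column while its charge is being shifted, so the smeared fraction is about
# transfer_time / (exposure_time + transfer_time).  All timings are assumed,
# illustrative values.

EXPOSURE_TIME = 20e-3   # 20 ms exposure (assumed)
SLOW_READOUT  = 15e-3   # reading a whole frame through the output amplifier (assumed)
FAST_SHIFT    = 0.3e-3  # shifting the whole frame into the masked store (assumed)

def smear_fraction(transfer_time, exposure_time=EXPOSURE_TIME):
    """Fraction of the collected signal picked up during the transfer."""
    return transfer_time / (exposure_time + transfer_time)

print(f"full-frame readout without a shutter: ~{smear_fraction(SLOW_READOUT):.0%} smear")
print(f"fast shift into the storage region:   ~{smear_fraction(FAST_SHIFT):.1%} smear")
```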
The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much. The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and on the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design. The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, or power-intensive mechanical shutter, an interline device may be the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection, or where cost, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. Frame-transfer is a middle compromise that was more common before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device. CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light. Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers. Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels. Frame transfer CCD. The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. 
While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as rolling shutter effect, making fast-moving objects appear distorted. In addition, the CCD cannot be used to collect light while it is being read out. Faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. A frame transfer CCD solves both problems: it has a shielded, not light-sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, the cells can be read out at any speed one deems necessary to correctly measure their charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures. The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed. Intensified charge-coupled device. An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD. An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage applied between the photocathode and the MCP. The electrons are multiplied inside the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons, which are guided to the CCD by a fiber optic or a lens. An image intensifier inherently includes a shutter functionality: if the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons reach the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called "gating", and therefore ICCDs are also called gateable CCD cameras. Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, gateability is one of the major advantages of the ICCD over EMCCD cameras. The highest-performing ICCD cameras enable shutter times as short as 200 picoseconds. ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to low operating temperatures. This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application. ICCDs are used in night vision devices and in various scientific applications. Electron-multiplying CCD. 
An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small ("P" < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (g = (1 + P)^N), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described by George E. Smith of Bell Telephone Laboratories in 1973. EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the "exact" gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the "excess noise factor" (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. For multiplication registers with many elements and large gains, the dispersion in the gain is well modelled by the Erlang distribution P(n) = n^(m−1) e^(−n/g) / (g^m (m−1)!), where "P" is the probability of getting "n" output electrons given "m" input electrons and a total mean multiplication register gain of "g". For very large numbers of input electrons, this complex distribution function converges towards a Gaussian. Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to low operating temperatures. This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high-speed photon-counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. 
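The effect of the stochastic multiplication described above can be demonstrated with a small Monte Carlo experiment. In the Python sketch below, each of N gain stages multiplies every electron with a small probability p, so the mean gain is (1 + P)^N; the spread of the realized gain produces the excess noise factor, which degrades the signal-to-noise ratio by roughly a factor of sqrt(2) at high gain, consistent with the "halved quantum efficiency" effect mentioned above. The number of stages, the per-stage probability and the photon level are illustrative assumptions.

```python
import numpy as np

# Monte Carlo model of an EMCCD gain register.  Each of N_STAGES stages gives
# every electron a small chance P_STAGE of impact-ionizing an extra electron,
# so the mean gain is (1 + P_STAGE)**N_STAGES.  The run estimates the excess
# noise factor (ENF) and the resulting SNR penalty.  All parameter values are
# assumed for illustration.

rng = np.random.default_rng(1)

N_STAGES = 600                 # gain-register stages (assumed)
P_STAGE = 0.012                # per-stage multiplication probability (assumed)
MEAN_PHOTOELECTRONS = 5.0      # mean signal per pixel, in electrons (assumed)
TRIALS = 50_000

mean_gain = (1.0 + P_STAGE) ** N_STAGES

# Poisson-distributed input charge, then stage-by-stage stochastic multiplication.
electrons = rng.poisson(MEAN_PHOTOELECTRONS, TRIALS)
for _ in range(N_STAGES):
    electrons = electrons + rng.binomial(electrons, P_STAGE)

snr_in = np.sqrt(MEAN_PHOTOELECTRONS)            # shot-noise-limited input SNR
snr_out = electrons.mean() / electrons.std()
enf_squared = electrons.var() / (mean_gain ** 2 * MEAN_PHOTOELECTRONS)

print(f"theoretical mean gain (1+p)^N : {mean_gain:7.1f}")
print(f"simulated mean gain           : {electrons.mean() / MEAN_PHOTOELECTRONS:7.1f}")
print(f"ENF^2 (expected to approach 2): {enf_squared:5.2f}")
print(f"SNR out / SNR in (about 1/sqrt(2)): {snr_out / snr_in:4.2f}")
```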
More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super-resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques, thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs. In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device. Use in astronomy. Due to the high quantum efficiencies of charge-coupled devices (CCDs) (the ideal quantum efficiency is 100%, one generated electron per incident photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications. Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter closed and opened. The average of images taken with the shutter closed is necessary to lower the random noise. Once developed, the dark frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by measuring the same collected charge multiple times, and have applications in precision light dark matter searches and neutrino measurements. The Hubble Space Telescope, in particular, has a highly developed series of steps ("data reduction pipeline") to convert the raw CCD data to useful images. CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them. An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky. The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time). 
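The dark-frame calibration step described above can be sketched in a few lines. The Python example below is illustrative only: the frame size, dark-current level, hot-pixel positions and synthetic sky brightness are arbitrary assumptions, and real pipelines (such as the Hubble data reduction pipeline mentioned above) involve many more corrections.

```python
import numpy as np

# Toy dark-frame calibration: average several closed-shutter exposures into a
# master dark and subtract it from the open-shutter image.  Frame size, dark
# current, hot-pixel positions and the synthetic sky level are all assumed.

rng = np.random.default_rng(2)
SHAPE = (64, 64)
DARK_RATE = 12.0                      # mean dark-current electrons per exposure (assumed)

hot_pixels = np.zeros(SHAPE)
hot_pixels[rng.integers(0, SHAPE[0], 10), rng.integers(0, SHAPE[1], 10)] = 400.0

def dark_frame():
    """One closed-shutter exposure: dark current, hot pixels and shot noise."""
    return rng.poisson(DARK_RATE + hot_pixels)

def light_frame(sky):
    """One open-shutter exposure: the scene plus the same dark signal."""
    return rng.poisson(sky + DARK_RATE + hot_pixels)

master_dark = np.mean([dark_frame() for _ in range(16)], axis=0)

sky = np.full(SHAPE, 50.0)            # flat synthetic sky of 50 electrons/pixel (assumed)
raw = light_frame(sky)
calibrated = raw - master_dark        # dark current and hot pixels largely removed

print("mean of raw frame       :", round(float(raw.mean()), 1))
print("mean of calibrated frame:", round(float(calibrated.mean()), 1), "(target ~50)")
```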
In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers. Color cameras. Digital color cameras, including the digital color cameras in smartphones, generally use an integral color image sensor, which has a color filter array fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution. Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location. For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled). Sensor sizes. Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″ called the optical format. This measurement originates back in the 1950s and the time of Vidicon tubes. Blooming. When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking. Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.
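The Bayer arrangement described under Color cameras above can be illustrated with a toy example. The Python sketch below builds an RGGB mosaic from a synthetic full-color image and then performs the crudest possible reconstruction, averaging each 2x2 block; the pattern layout, image size and interpolation scheme are illustrative assumptions, and real cameras use far more sophisticated demosaicing.

```python
import numpy as np

# Toy Bayer (RGGB) sampling and the crudest possible reconstruction: each 2x2
# block contributes one red, two green and one blue sample, which are combined
# into a single RGB value, halving the resolution in each direction.  The
# pattern layout and image size are assumed for illustration.

rng = np.random.default_rng(3)
H, W = 8, 8
scene = rng.uniform(0.0, 1.0, size=(H, W, 3))     # synthetic full-color scene

# Build the mosaic: one color sample per pixel location.
mosaic = np.zeros((H, W))
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]         # red on even rows/even columns
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]         # green
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]         # green
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]         # blue on odd rows/odd columns

def demosaic_block_average(m):
    """Collapse each 2x2 RGGB block into one RGB pixel (greens are averaged)."""
    h, w = m.shape
    out = np.zeros((h // 2, w // 2, 3))
    out[..., 0] = m[0::2, 0::2]
    out[..., 1] = 0.5 * (m[0::2, 1::2] + m[1::2, 0::2])
    out[..., 2] = m[1::2, 1::2]
    return out

reconstructed = demosaic_block_average(mosaic)
print("mosaic:", mosaic.shape, "-> reconstructed RGB:", reconstructed.shape)
```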
6806
11677590
https://en.wikipedia.org/wiki?curid=6806
Computer memory
Computer memory stores information, such as data and programs, for immediate use in the computer. The term "memory" is often synonymous with the terms "RAM," "main memory," or "primary storage." Archaic synonyms for main memory include "core" (for magnetic core memory) and "store". Main memory operates at a high speed compared to mass storage, which is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as a mass storage cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called "virtual memory". Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM, and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage and static random-access memory (SRAM) used mainly for CPU cache. Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and a multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. History. In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes. The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode-ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for memory recall after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind I computer in 1953. 
Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s. The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. In the same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s. MOS memory. The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s. The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95. Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992. The term "memory" is also often used to refer to non-volatile memory including read-only memory (ROM) through modern flash memory. 
Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987. Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers. Volatility categories. Volatile memory. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). DRAM dominates for desktop system memory. SRAM is used for CPU cache. SRAM is also found in small embedded systems requiring little memory. SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. Non-volatile memory. Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as magnetic drum, paper tape and punched cards. Non-volatile memory technologies under development include ferroelectric RAM, programmable metallization cell, spin-transfer torque magnetic RAM, SONOS, resistive random-access memory, racetrack memory, Nano-RAM, 3D XPoint, and millipede memory. Semi-volatile memory. A third category of memory is "semi-volatile". The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory. For example, some non-volatile memory types experience wear when written. A "worn" cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits. As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. 
In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold. The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types, such as nvSRAM, which combines SRAM and a non-volatile memory on the same chip, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Another example is battery-backed RAM, which uses an external battery to power the memory device in case of external power loss. If power is off for an extended period of time, the battery may run out, resulting in data loss. Management. Proper management of memory is vital for a computer system to operate properly. Modern operating systems have complex systems to properly manage memory. Failure to do so can lead to bugs or slow performance. Bugs. Improper management of memory is a common cause of bugs and security vulnerabilities, including the following types: Virtual memory. Virtual memory is a system where physical memory is managed by the operating system typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swapfile), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing. Protected memory. Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system. Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to run off of corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be used benignly by desirable programs which are intended to modify other programs, debuggers, for example, to insert breakpoints or hooks.
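The virtual-memory and protected-memory ideas described above can be illustrated with a toy address translator. The Python sketch below maps virtual pages to physical frames and enforces a per-page write permission; the page size, table contents and fault behaviour are illustrative assumptions and do not correspond to any particular operating system.

```python
# Toy model of virtual-to-physical address translation with page-level write
# protection.  The page size, the page-table contents and the behaviour on a
# fault are all assumed for illustration.

PAGE_SIZE = 4096

# virtual page number -> (physical frame number, writable?)
PAGE_TABLE = {
    0: (7, True),    # the program's own read/write data page
    1: (3, False),   # a read-only page (e.g. shared code)
    # virtual page 2 is deliberately left unmapped
}

class MemoryFault(Exception):
    """Raised instead of crashing the whole 'machine', as protected memory intends."""

def translate(virtual_address, for_write=False):
    """Return the physical address, enforcing presence and write permission."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in PAGE_TABLE:
        raise MemoryFault(f"page fault: virtual page {page} is not mapped")
    frame, writable = PAGE_TABLE[page]
    if for_write and not writable:
        raise MemoryFault(f"protection fault: virtual page {page} is read-only")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x0123)))                    # read from a mapped page
print(hex(translate(0x1040)))                    # read from the read-only page
for address, write in [(0x1040, True), (0x2000, False)]:
    try:
        translate(address, for_write=write)
    except MemoryFault as fault:
        print("trapped:", fault)
```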
6809
40192293
https://en.wikipedia.org/wiki?curid=6809
CDC (disambiguation)
The Centers for Disease Control and Prevention is the national public health agency of the United States. CDC may also refer to:
6811
1300505514
https://en.wikipedia.org/wiki?curid=6811
Centers for Disease Control and Prevention
The Centers for Disease Control and Prevention (CDC) is the national public health agency of the United States. It is a United States federal agency under the Department of Health and Human Services (HHS), and is headquartered in Atlanta, Georgia. The CDC's current nominee for director is Susan Monarez. She became acting director on January 23, 2025, but stepped down on March 24, 2025, when nominated for the director position. On May 14, 2025, Robert F. Kennedy Jr. stated that lawyer Matthew Buzzelli is acting CDC director. However, the CDC website does not state the acting director's name. The agency's main goal is the protection of public health and safety through the control and prevention of disease, injury, and disability in the US and worldwide. The CDC focuses national attention on developing and applying disease control and prevention. It especially focuses its attention on infectious disease, foodborne pathogens, environmental health, occupational safety and health, health promotion, injury prevention, and educational activities designed to improve the health of United States citizens. The CDC also conducts research and provides information on non-infectious diseases, such as obesity and diabetes, and is a founding member of the International Association of National Public Health Institutes. As part of the announced 2025 HHS reorganization, CDC is planned to be reoriented towards infectious disease programs. It is planned to absorb the Administration for Strategic Preparedness and Response, while the National Institute for Occupational Safety and Health is planned to move into the new Administration for a Healthy America. History. Establishment. The Communicable Disease Center was founded July 1, 1946, as the successor to the World War II Malaria Control in War Areas program of the Office of National Defense Malaria Control Activities. Preceding its founding, organizations with global influence in malaria control were the Malaria Commission of the League of Nations and the Rockefeller Foundation. The Rockefeller Foundation greatly supported malaria control, sought to have the governments take over some of its efforts, and collaborated with the agency. The new agency was a branch of the U.S. Public Health Service, and Atlanta was chosen as the location because malaria was endemic in the Southern United States. The agency went through several name changes before adopting the name "Communicable Disease Center" in 1946. Offices were located on the sixth floor of the Volunteer Building on Peachtree Street. With a budget at the time of about $1 million, 59 percent of its personnel were engaged in mosquito abatement and habitat control with the objective of control and eradication of malaria in the United States (see National Malaria Eradication Program). Among its 369 employees, the main occupations at CDC were originally entomology and engineering. In CDC's initial years, more than six and a half million homes were sprayed, mostly with DDT. In 1946, there were only seven medical officers on duty and an early organization chart was drawn. Under Joseph Walter Mountin, the CDC continued to be an advocate for public health issues and pushed to extend its responsibilities to many other communicable diseases. In 1947, the CDC made a token payment of $10 to Emory University for a parcel of land on Clifton Road in DeKalb County, still the home of CDC headquarters as of 2025. CDC employees collected the money to make the purchase. The benefactor behind the "gift" was Robert W. 
Woodruff, chairman of the board of the Coca-Cola Company. Woodruff had a long-time interest in malaria control, which had been a problem in areas where he went hunting. The same year, the PHS transferred its San Francisco-based plague laboratory into the CDC as the Epidemiology Division, and a new Veterinary Diseases Division was established. Growth. In 1951, Chief Epidemiologist Alexander Langmuir's warnings of potential biological warfare during the Korean War spurred the creation of the Epidemic Intelligence Service (EIS) as a two-year postgraduate training program in epidemiology. The success of the EIS program led to the launch of Field Epidemiology Training Programs (FETP) in 1980, training more than 18,000 disease detectives in over 80 countries. In 2020, FETP celebrated the 40th anniversary of the CDC's support for Thailand's Field Epidemiology Training Program. Thailand was the first FETP site created outside of North America; the program is now found in numerous countries, reflecting CDC's influence in promoting this model internationally. The Training Programs in Epidemiology and Public Health Interventions Network (TEPHINET) has graduated 950 students. The mission of the CDC expanded beyond its original focus on malaria to include sexually transmitted diseases when the Venereal Disease Division of the U.S. Public Health Service (PHS) was transferred to the CDC in 1957. Shortly thereafter, Tuberculosis Control was transferred (in 1960) to the CDC from PHS, and then in 1963 the Immunization program was established. It became the National Communicable Disease Center effective July 1, 1967, and the Center for Disease Control on June 24, 1970. At the end of the Public Health Service reorganizations of 1966–1973, it was promoted to being a principal operating agency of PHS. Recent history. It was renamed to the plural Centers for Disease Control effective October 14, 1980, as the modern organization of multiple constituent centers was established. By 1990, it had four centers formed in the 1980s: the Center for Infectious Diseases, Center for Chronic Disease Prevention and Health Promotion, the Center for Environmental Health and Injury Control, and the Center for Prevention Services; as well as two centers that had been absorbed by CDC from outside: the National Institute for Occupational Safety and Health in 1973, and the National Center for Health Statistics in 1987. An act of the United States Congress appended the words "and Prevention" to the name effective October 27, 1992. However, Congress directed that the initialism CDC be retained because of its name recognition. Since the 1990s, the CDC focus has broadened to include chronic diseases, disabilities, injury control, workplace hazards, environmental health threats, and terrorism preparedness. CDC combats emerging diseases and other health risks, including birth defects, West Nile virus, obesity, avian, swine, and pandemic flu, E. coli, and bioterrorism, to name a few. The organization would also prove to be an important factor in preventing the abuse of penicillin. In May 1994 the CDC admitted having sent samples of communicable diseases to the Iraqi government from 1984 through 1989, which were subsequently repurposed for biological warfare, including Botulinum toxin, West Nile virus, "Yersinia pestis" and Dengue fever virus. On April 21, 2005, then–CDC director Julie Gerberding formally announced the reorganization of CDC to "confront the challenges of 21st-century health threats". She established four coordinating centers. 
In 2009 the Obama administration re-evaluated this change and ordered them cut as an unnecessary management layer. As of 2013, the CDC's Biosafety Level 4 laboratories were among the few that existed in the world. They included one of only two official repositories of smallpox in the world, with the other one located at the State Research Center of Virology and Biotechnology VECTOR in the Russian Federation. In 2014, the CDC revealed they had discovered several misplaced smallpox samples while their lab workers were "potentially infected" with anthrax. The city of Atlanta annexed the property of the CDC headquarters effective January 1, 2018, as a part of the city's largest annexation within a period of 65 years; the Atlanta City Council had voted to do so the prior December. The CDC and Emory University had requested that the Atlanta city government annex the area, paving the way for a MARTA expansion through the Emory campus, funded by city tax dollars. The headquarters were located in an unincorporated area, statistically in the Druid Hills census-designated place. On August 17, 2022, CDC director Rochelle Walensky said the CDC would make drastic changes in the wake of mistakes during the COVID-19 pandemic. She outlined an overhaul of how the CDC would analyze and share data and how they would communicate information to the general public. In her statement to all CDC employees, she said: "For 75 years, CDC and public health have been preparing for COVID-19, and in our big moment, our performance did not reliably meet expectations." Based on the findings of an internal report, Walensky concluded that "The CDC must refocus itself on public health needs, respond much faster to emergencies and outbreaks of disease, and provide information in a way that ordinary people and state and local health authorities can understand and put to use" (as summarized by the New York Times). Second Trump administration. In January 2025, it was reported that a CDC official had ordered all CDC staff to stop working with the World Health Organization. Around January 31, 2025, several CDC websites, pages, and datasets related to HIV and STI prevention, LGBT and youth health became unavailable for viewing after the agency was ordered to comply with Donald Trump's executive order to remove all material of "diversity, equity, and inclusion" and "gender identity". Shortly thereafter, the CDC ordered its scientists to retract or pause the publication of all research which had been submitted or accepted for publication, but not yet published, which included any of the following banned terms: "Gender, transgender, pregnant person, pregnant people, LGBT, transsexual, non-binary, nonbinary, assigned male at birth, assigned female at birth, biologically male, biologically female." Also in January 2025, due to a pause in communications imposed by the second Trump administration at federal health agencies, publication of the Morbidity and Mortality Weekly Report (MMWR) was halted, the first time that had happened since its inception in 1960. The president of the Infectious Diseases Society of America (IDSA) called the pause in publication a "disaster." Attempts to halt publication had been made by the first Trump administration after MMWR published information about COVID-19 that "conflicted with messaging from the White House." The pause in communications also caused the cancellation of a meeting between the CDC and IDSA about threats to public health regarding the H5N1 influenza virus. 
On February 14, 2025, around 1,300 CDC employees were laid off by the administration, including all first-year officers of the Epidemic Intelligence Service. The cuts also terminated 16 of the 24 Laboratory Leadership Service program fellows, a program designed for early-career lab scientists to address laboratory testing shortcomings of the CDC. In the following month, the Trump administration quietly withdrew its CDC director nominee, Dave Weldon, just minutes before his scheduled Senate confirmation hearing on March 13. In April 2025, it was reported that among the reductions is the elimination of the Freedom of Information Act team, the Division of Violence Prevention, laboratories involved in testing for antibiotic resistance, and the team responsible for determining recalls of hazardous infant products. Additional cuts affect the technology branch of the Center for Forecasting and Outbreak Analytics, which includes software engineers and computer scientists supporting the center established during the COVID-19 pandemic to improve disease outbreak prediction. Organization. The CDC is organized into centers, institutes, and offices (CIOs), with each organizational unit implementing the agency's activities in a particular area of expertise while also providing intra-agency support and resource-sharing for cross-cutting issues and specific health threats. As of the most recent reorganization in February 2023, the CIOs are: The Office of Public Health Preparedness was created during the 2001 anthrax attacks shortly after the terrorist attacks of September 11, 2001. Its purpose was to coordinate among the government the response to a range of biological terrorism threats. Locations. Most CDC centers are located in the Atlanta metropolitan area, where it has three major campuses: A few of the centers are based in or operate other domestic locations: In addition, CDC operates quarantine facilities in 20 cities in the U.S. Budget. The CDC budget for fiscal year 2024 is $11.581 billion. Workforce. CDC staff numbered approximately 15,000 personnel (including 6,000 contractors and 840 United States Public Health Service Commissioned Corps officers) in 170 occupations. Eighty percent held bachelor's degrees or higher; almost half had advanced degrees (a master's degree or a doctorate such as a PhD, D.O., or M.D.). Common CDC job titles include engineer, entomologist, epidemiologist, biologist, physician, veterinarian, behavioral scientist, nurse, medical technologist, economist, public health advisor, health communicator, toxicologist, chemist, computer scientist, and statistician. The CDC also operates a number of notable training and fellowship programs, including those indicated below. Epidemic Intelligence Service (EIS). The Epidemic Intelligence Service (EIS) is composed of "boots-on-the-ground disease detectives" who investigate public health problems domestically and globally. When called upon by a governmental body, EIS officers may embark on short-term epidemiological assistance assignments, or "Epi-Aids", to provide technical expertise in containing and investigating disease outbreaks. The EIS program is a model for the international Field Epidemiology Training Program. Public Health Associates Program. The CDC also operates the Public Health Associate Program (PHAP), a two-year paid fellowship for recent college graduates to work in public health agencies all over the United States. PHAP was founded in 2007 and currently has 159 associates in 34 states. Leadership. 
The director of the CDC is a position that currently requires Senate confirmation. The director serves at the pleasure of the President and may be fired at any time. The CDC director concurrently serves as the Administrator of the Agency for Toxic Substances and Disease Registry. Prior to January 20, 2025, it was a Senior Executive Service position that could be filled either by a career employee, or as a political appointment that does not require Senate confirmation, with the latter method typically being used. The change to requiring Senate confirmation was due to a provision in the Consolidated Appropriations Act, 2023. Twenty directors have served the CDC or its predecessor agencies, including three who have served during the Trump administration (including Anne Schuchat who twice served as acting director) and three who have served during the Carter administration (including one acting director not shown here). Two served under Bill Clinton, but only one under the Nixon to Ford terms. List of directors. The following persons have served as the director of the Centers for Disease Control and Prevention (or chief of the Communicable Disease Center): Areas of focus. Communicable diseases. The CDC's programs address more than 400 diseases, health threats, and conditions that are major causes of death, disease, and disability. The CDC's website has information on various infectious (and noninfectious) diseases, including smallpox, measles, and others. Influenza. The CDC targets the transmission of influenza, including the H1N1 swine flu, and launched websites to educate people about hygiene. Division of Select Agents and Toxins. Within the division are two programs: the Federal Select Agent Program (FSAP) and the Import Permit Program. The FSAP is run jointly with an office within the U.S. Department of Agriculture, regulating agents that can cause disease in humans, animals, and plants. The Import Permit Program regulates the importation of "infectious biological materials." The CDC runs a program that protects the public from rare and dangerous substances such as anthrax and the Ebola virus. The program, called the Federal Select Agent Program, calls for inspections of labs in the U.S. that work with dangerous pathogens. During the 2014 Ebola outbreak in West Africa, the CDC helped coordinate the return of two infected American aid workers for treatment at Emory University Hospital, the home of a special unit to handle highly infectious diseases. As a response to the 2014 Ebola outbreak, Congress passed a Continuing Appropriations Resolution allocating $30,000,000 towards CDC's efforts to fight the virus. Non-communicable diseases. The CDC also works on non-communicable diseases, including chronic diseases caused by obesity, physical inactivity and tobacco use. The work of the Division for Cancer Prevention and Control, led from 2010 by Lisa C. Richardson, is also within this remit. Antibiotic resistance. The CDC implemented its "National Action Plan for Combating Antibiotic Resistant Bacteria" as a measure against the spread of antibiotic resistance in the United States. This initiative has a budget of $161 million and includes the development of the Antibiotic Resistance Lab Network. Global health. Globally, the CDC works with other organizations to address global health challenges and contain disease threats at their source. 
The agency works with many international organizations, such as the World Health Organization (WHO), as well as with ministries of health and other groups on the front lines of outbreaks. The agency maintains staff in more than 60 countries, including some from the U.S. but more from the countries in which they operate. The agency's global divisions include the Division of Global HIV and TB (DGHT), the Division of Parasitic Diseases and Malaria (DPDM), the Division of Global Health Protection (DGHP), and the Global Immunization Division (GID). The CDC has been working with the WHO to implement the "International Health Regulations" (IHR), an agreement between 196 countries to prevent, control, and report on the international spread of disease, through initiatives including the Global Disease Detection Program (GDD). The CDC has also been involved in implementing the U.S. global health initiatives the President's Emergency Plan for AIDS Relief (PEPFAR) and the President's Malaria Initiative. Travelers' health. The CDC collects and publishes health information for travelers in a comprehensive book, "CDC Health Information for International Travel", which is commonly known as the "yellow book." The book is available online and in print as a new edition every other year and includes current travel health guidelines, vaccine recommendations, and information on specific travel destinations. The CDC also issues travel health notices on its website, consisting of three levels: Vaccine safety. The CDC uses a number of tools to monitor the safety of vaccines. The Vaccine Adverse Event Reporting System (VAERS) is a national vaccine safety surveillance program run by the CDC and the FDA. "VAERS detects possible safety issues with U.S. vaccines by collecting information about adverse events (possible side effects or health problems) after vaccination." The CDC's Safety Information by Vaccine page provides a list of the latest safety information, side effects, and answers to common questions about CDC-recommended vaccines. The Vaccine Safety Datalink (VSD) works with a network of healthcare organizations to share data on vaccine safety and adverse events. The Clinical Immunization Safety Assessment (CISA) project is a network of vaccine experts and health centers that research and assist the CDC in the area of vaccine safety. The CDC also runs a program called V-safe, a smartphone web application that allows COVID-19 vaccine recipients to be surveyed in detail about their health in response to getting the shot. CDC Foundation. The CDC Foundation operates independently from CDC as a private, nonprofit 501(c)(3) organization incorporated in the State of Georgia. The creation of the Foundation was authorized by section 399F of the Public Health Service Act to support the mission of CDC in partnership with the private sector, including organizations, foundations, businesses, educational groups, and individuals. From 1995 to 2022, the foundation raised over $1.6 billion and launched more than 1,200 health programs. Bill Cosby formerly served as a member of the foundation's Board of Directors, continuing as an honorary member after completing his term. Activities. The foundation engages in research projects and health programs in more than 160 countries every year, including in focus areas such as cardiovascular disease, cancer, emergency response, and infectious diseases, particularly HIV/AIDS, Ebola, rotavirus, and COVID-19. Criticism. 
In 2015, "BMJ" associate editor Jeanne Lenzer raised concerns that the CDC's recommendations and publications may be influenced by donations received through the Foundation, whose donors include pharmaceutical companies. Controversies. Tuskegee study of untreated syphilis in Black men. For 15 years, the CDC had direct oversight over the Tuskegee syphilis experiment. In the study, which lasted from 1932 to 1972, a group of Black men (nearly 400 of whom had syphilis) were studied to learn more about the disease. The disease was left untreated in the men, who had not given their informed consent to serve as research subjects. The Tuskegee Study was initiated in 1932 by the Public Health Service, with the CDC taking over the Tuskegee Health Benefit Program in 1995. Gun control. An area of partisan dispute related to CDC funding is the study of firearms and gun violence. Although the CDC was one of the first government agencies to study gun-related data, the Dickey Amendment, passed in 1996 with the support of the National Rifle Association of America, states that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control". Advocates for gun control oppose the amendment and have tried to overturn it. Looking at the history of the amendment's passage: in 1992, Mark L. Rosenberg and five CDC colleagues founded the CDC's National Center for Injury Prevention and Control, with an annual budget of approximately $260,000. They focused on "identifying causes of firearm deaths, and methods to prevent them". Their first report, published in the "New England Journal of Medicine" in 1993 and entitled "Guns are a Risk Factor for Homicide in the Home", reported that the "mere presence of a gun in a home increased the risk of a firearm-related death by 2.7 percent, and suicide fivefold", a "huge" increase. In response, the NRA launched a "campaign to shut down the Injury Center." Two conservative pro-gun groups, Doctors for Responsible Gun Ownership and Doctors for Integrity and Policy Research, joined the pro-gun effort, and, by 1995, politicians also supported the pro-gun initiative. In 1996, Representative Jay Dickey (R-Arkansas) introduced the Dickey Amendment, stating that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control", as a rider in the 1996 appropriations bill. In 1997, "Congress re-directed all of the money for gun research to the study of traumatic brain injury." David Satcher, CDC head from 1993 to 1998, advocated for firearms research. In 2016, over a dozen "public health insiders, including current and former CDC senior leaders" told "The Trace" interviewers that CDC senior leaders took a cautious stance in their interpretation of the Dickey Amendment and that they could do more but were afraid of political and personal retribution. In 2013, the American Medical Association, the American Psychological Association, and the American Academy of Pediatrics sent a letter to the leaders of the Senate Appropriations Committee asking them "to support at least $10 million within the Centers for Disease Control and Prevention (CDC) in FY 2014 along with sufficient new funding at the National Institutes of Health to support research into the causes and prevention of violence. Furthermore, we urge Members to oppose any efforts to reduce, eliminate, or condition CDC funding related to violence prevention research." 
Congress maintained the ban in subsequent budgets. Ebola. In October 2014, the CDC gave a nurse with a fever who was later diagnosed with Ebola permission to board a commercial flight to Cleveland. COVID-19. The CDC has been widely criticized for its handling of the COVID-19 pandemic. In 2022, CDC director Rochelle Walensky acknowledged "some pretty dramatic, pretty public mistakes, from testing to data to communications", based on the findings of an internal examination. The first confirmed case of COVID-19 was discovered in the U.S. on January 20, 2020. However, widespread COVID-19 testing in the United States was effectively stalled until February 28, when federal officials revised a faulty CDC test, and days afterward, when the Food and Drug Administration began loosening rules that had restricted other labs from developing tests. In February 2020, as the CDC's early coronavirus test malfunctioned nationwide, CDC Director Robert R. Redfield reassured fellow officials on the White House Coronavirus Task Force that the problem would be quickly solved, according to White House officials. It took about three weeks to sort out the failed test kits, which may have been contaminated during their processing in a CDC lab. Later investigations by the FDA and the Department of Health and Human Services found that the CDC had violated its own protocols in developing its tests. In November 2020, "NPR" reported that an internal review document they obtained revealed that the CDC was aware that the first batch of tests which were issued in early January had a chance of being wrong 33 percent of the time, but they released them anyway. In May 2020, "The Atlantic" reported that the CDC was conflating the results of two different types of coronavirus tests – tests that diagnose current coronavirus infections, and tests that measure whether someone has ever had the virus. The magazine said this distorted several important metrics, provided the country with an inaccurate picture of the state of the pandemic, and overstated the country's testing ability. In July 2020, the Trump administration ordered hospitals to bypass the CDC and instead send all COVID-19 patient information to a database at the Department of Health and Human Services. Some health experts opposed the order and warned that the data might become politicized or withheld from the public. On July 15, the CDC alarmed health care groups by temporarily removing COVID-19 dashboards from its website. It restored the data a day later. In August 2020, the CDC issued guidance stating that people showing no COVID-19 symptoms do not need testing. The new guidelines alarmed many public health experts. The guidelines were crafted by the White House Coronavirus Task Force without the sign-off of Anthony Fauci of the NIH. Objections by other experts at the CDC went unheard. Officials said that a CDC document in July arguing for "the importance of reopening schools" was also crafted outside the CDC. On August 16, the chief of staff, Kyle McGowan, and his deputy, Amanda Campbell, resigned from the agency. The testing guidelines were reversed on September 18, 2020, after public controversy. In September 2020, the CDC drafted an order requiring masks on all public transportation in the United States, but the White House Coronavirus Task Force blocked the order, refusing to discuss it, according to two federal health officials. 
In October 2020, it was disclosed that White House advisers had repeatedly altered the writings of CDC scientists about COVID-19, including recommendations on church choirs, social distancing in bars and restaurants, and summaries of public-health reports. In the lead-up to Thanksgiving 2020, the CDC advised Americans not to travel for the holiday, saying, "It's not a requirement. It's a recommendation for the American public to consider." The White House coronavirus task force had its first public briefing in months on that date, but travel was not mentioned. The New York Times later concluded that the CDC's decisions to "ben[d] to political pressure from the Trump White House to alter key public health guidance or withhold it from the public [...] cost it a measure of public trust that experts say it still has not recaptured" as of 2022. In May 2021, following criticism by scientists, the CDC updated its COVID-19 guidance to acknowledge airborne transmission of COVID-19, after having previously claimed that the majority of infections occurred via "close contact, not airborne transmission". In December 2021, following a request from the CEO of Delta Air Lines, CDC shortened its recommended isolation period for asymptomatic individuals infected with COVID-19 from 10 days to five. Until 2022, the CDC withheld critical data about COVID-19 vaccine boosters, hospitalizations, and wastewater surveillance. On June 10, 2022, the Biden Administration ordered the CDC to remove the COVID-19 testing requirement for air travelers entering the United States. Controversy over the Morbidity and Mortality Weekly Report. During the pandemic, the CDC Morbidity and Mortality Weekly Report (MMWR) came under pressure from political appointees at the Department of Health and Human Services (HHS) to modify its reporting so as not to conflict with what Trump was saying about the pandemic. Starting in June 2020, Michael Caputo, the HHS assistant secretary for public affairs, and his chief advisor Paul Alexander tried to delay, suppress, change, and retroactively edit MMWR releases about the effectiveness of potential treatments for COVID-19, the transmissibility of the virus, and other issues where the president had taken a public stance. Alexander tried unsuccessfully to get personal approval of all issues of MMWR before they went out. Caputo claimed this oversight was necessary because MMWR reports were being tainted by "political content"; he demanded to know the political leanings of the scientists who reported that hydroxychloroquine had little benefit as a treatment while Trump was saying the opposite. In emails Alexander accused CDC scientists of attempting to "hurt the president" and writing "hit pieces on the administration". In October 2020, emails obtained by "Politico" showed that Alexander requested multiple alterations in a report. The published alterations included a title being changed from "Children, Adolescents, and Young Adults" to "Persons." One current and two former CDC officials who reviewed the email exchanges said they were troubled by the "intervention to alter scientific reports viewed as untouchable prior to the Trump administration" that "appeared to minimize the risks of the coronavirus to children by making the report's focus on children less clear." Eroding trust in the CDC as a result of COVID-19 controversies. A poll conducted in September 2020 found that nearly 8 in 10 Americans trusted the CDC, a decrease from 87 percent in April 2020. 
Another poll showed an even larger drop in trust, with results falling 16 percentage points. By January 2022, according to an NBC News poll, only 44% of Americans trusted the CDC compared to 69% at the beginning of the pandemic. As trust in the agency eroded, so too did trust in the information it disseminates. The diminishing level of trust in the CDC and its information releases also incited "vaccine hesitancy", with the result that "just 53 percent of Americans said they would be somewhat or extremely likely to get a vaccine." In September 2020, amid the accusations and the faltering image of the CDC, the agency's leadership was called into question. Former acting director at the CDC, Richard Besser, said of Redfield that "I find it concerning that the CDC director has not been outspoken when there have been instances of clear political interference in the interpretation of science." In addition, Mark Rosenberg, the first director of CDC's National Center for Injury Prevention and Control, also questioned Redfield's leadership and his lack of defense of the science. Historically, the CDC has not been a political agency; however, the COVID-19 pandemic, and specifically the Trump administration's handling of the pandemic, resulted in a "dangerous shift" according to a previous CDC director and others. Four previous directors claim that the agency's voice was "muted for political reasons." Politicization of the agency has continued into the Biden administration, as COVID-19 guidance has been contradicted by state guidance and critics have charged that "CDC's credibility is eroding". In 2021, the CDC, then under the leadership of the Biden administration, received criticism for its mixed messaging surrounding COVID-19 vaccines, mask-wearing guidance, and the state of the pandemic. Gender censorship. On February 1, 2025, the CDC ordered its scientists to retract any not-yet-published research they had produced which included any of the following banned terms: "Gender, transgender, pregnant person, pregnant people, LGBT, transsexual, non-binary, nonbinary, assigned male at birth, assigned female at birth, biologically male, biologically female". Larry Gostin, director of the World Health Organization Center on Global Health Law, said that the directive amounted to censorship of not only government employees, but private citizens as well. For example, if the lead author of a submitted paper works for the CDC and withdraws their name from the submission, that kills the submission even if coauthors who are private scientists remain on it. Other censored topics include DEI, climate change, and HIV. Following extensive public backlash, some, but not all, of the removed pages were reinstated. The CDC's censorship led many researchers and journalists to preserve databases themselves, with many removed articles being uploaded to archival sites such as the Internet Archive. On February 4, Doctors for America filed a federal lawsuit against the CDC, Food and Drug Administration, and Department of Health and Human Services, asking that the removed websites be put back online. On February 11, a judge ordered the removed pages to be restored temporarily while the suit is considered, citing doctors who said the removed materials were "vital for real-time clinical decision-making". Popular culture. Zombie Apocalypse campaign. On May 16, 2011, the Centers for Disease Control and Prevention's blog published an article on what to do to prepare for a zombie invasion. 
While the article did not claim that such a scenario was possible, it did use the popular culture appeal as a means of urging citizens to prepare for all potential hazards, such as earthquakes, tornadoes, and floods. According to David Daigle, the associate director for communications, public health preparedness and response, the idea arose when his team was discussing their upcoming hurricane-information campaign and Daigle mused that "we say pretty much the same things every year, in the same way, and I just wonder how many people are paying attention." A social-media employee mentioned that the subject of zombies had come up a lot on Twitter when she had been tweeting about the Fukushima Daiichi nuclear disaster and radiation. The team realized that a campaign like this would most likely reach a different audience from the one that normally pays attention to hurricane-preparedness warnings and went to work on the zombie campaign, launching it right before hurricane season began. "The whole idea was, if you're prepared for a zombie apocalypse, you're prepared for pretty much anything," said Daigle. Once the blog article was posted, the CDC announced an open contest for YouTube submissions of the most creative and effective videos covering preparedness for a zombie apocalypse (or apocalypse of any kind), to be judged by the "CDC Zombie Task Force". Submissions were open until October 11, 2011. They also released a zombie-themed graphic novella available on their website. Zombie-themed educational materials for teachers are available on the site.
https://en.wikipedia.org/wiki?curid=6813
Chandrasekhar limit
The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 solar masses. The limit was named after Subrahmanyan Chandrasekhar. White dwarfs resist gravitational collapse primarily through electron degeneracy pressure, in contrast to main sequence stars, which resist collapse through thermal pressure. The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Physics. Normal stars fuse gravitationally compressed hydrogen into helium, generating vast amounts of heat. As the hydrogen is consumed, the star's core compresses further, allowing helium and heavier nuclei to fuse, ultimately resulting in stable iron nuclei, a process called stellar evolution. The next step depends upon the mass of the star. Stars below the Chandrasekhar limit become stable white dwarf stars, remaining that way throughout the rest of the history of the universe (assuming the absence of external forces). Stars above the limit can become neutron stars or black holes. The Chandrasekhar limit is a consequence of competition between gravity and electron degeneracy pressure. Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K₁ρ^(5/3), where P is the pressure, ρ is the mass density, and K₁ is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2, and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass. As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K₂ρ^(4/3). This yields a polytrope of index 3, which has a total mass, M_limit, depending only on K₂. For a fully relativistic treatment, the equation of state used interpolates between the equation P = K₁ρ^(5/3) for small ρ and P = K₂ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at M_limit. This is the Chandrasekhar limit. In the curves of radius against mass for the non-relativistic and relativistic models, μ_e has been set equal to 2; radius is measured in standard solar radii or kilometers, and mass in standard solar masses. Calculated values for the limit vary depending on the nuclear composition of the mass. 
Chandrasekhar gives the following expression, based on the equation of state for an ideal Fermi gas: M_limit = (ω₃⁰ √(3π)/2) (ħc/G)^(3/2) / (μ_e m_H)², where ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant, μ_e is the average molecular weight per electron (which depends upon the chemical composition of the star), m_H is the mass of the hydrogen atom, and ω₃⁰ ≈ 2.018236 is a constant connected with the solution to the Lane–Emden equation. As √(ħc/G) is the Planck mass, the limit is of the order of M_Pl³/m_H². The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density. A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation. History. In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately . In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately (for ). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community. A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. The existence of a related limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner for a uniform density star in 1929. Eric G. Blackman wrote that the roles of Stoner and Anderson in the discovery of mass limits were overlooked when Freeman Dyson wrote a biography of Chandrasekhar. Michael Nauenberg claims that Stoner established the mass limit first. The priority dispute has also been discussed at length by Virginia Trimble, who writes that: "Chandrasekhar famously, perhaps even notoriously did his critical calculation on board ship in 1930, and ... was not aware of either Stoner's or Anderson's work at the time. His work was therefore independent, but, more to the point, he adopted Eddington's polytropes for his models which could, therefore, be in hydrostatic equilibrium, which constant density stars cannot, and real ones must be." This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar masses. 
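As a rough numerical check of Chandrasekhar's expression given at the start of this section, the limiting mass can be evaluated directly from the physical constants. The short sketch below is illustrative only: the constant values are approximate, ω₃⁰ ≈ 2.018236 is the Lane–Emden constant quoted in the text, and the helper name chandrasekhar_mass is ours, not a standard library function.

```python
import math

# Approximate physical constants (SI units); values are illustrative, not authoritative.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735575e-27       # mass of the hydrogen atom, kg
M_sun = 1.98892e30        # solar mass, kg
omega_3_0 = 2.018236      # constant from the n = 3 Lane-Emden solution

def chandrasekhar_mass(mu_e):
    """Limiting mass in kg for a given average molecular weight per electron mu_e."""
    return (omega_3_0 * math.sqrt(3.0 * math.pi) / 2.0
            * (hbar * c / G) ** 1.5
            / (mu_e * m_H) ** 2)

for mu_e in (2.0, 2.15):
    m = chandrasekhar_mass(mu_e)
    print(f"mu_e = {mu_e}: M_limit ~ {m:.3e} kg ~ {m / M_sun:.2f} solar masses")
```

For μ_e = 2 (appropriate to helium, carbon, or oxygen compositions) this reproduces the familiar value of roughly 1.4 solar masses, while more neutron-rich compositions (larger μ_e) give a somewhat smaller limit, consistent with the statement above that calculated values depend on the nuclear composition.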
Chandrasekhar–Eddington dispute. In 1935, the 25-year-old Chandrasekhar presented his work on the limit at a scientific conference. It was immediately opposed by the established British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After the talk by Chandrasekhar on the limit, Eddington remarked: Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K₁ρ^(5/3) universally applicable, even for large ρ. Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar. Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of "Empire of the Stars", Arthur I. Miller's biography of Chandrasekhar. In Miller's view: However, Chandrasekhar chose to move on, leaving the study of stellar structure to focus on stellar dynamics. In 1983, in recognition of his work, Chandrasekhar shared a Nobel Prize "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars" with William Alfred Fowler. Applications. The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse. If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of 10⁴⁶ J (100 foes). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed to be responsible for supernovae of types Ib, Ic, and II. Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. 
This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova. A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, the absolute magnitude is approximately −19.3, with a standard deviation of no more than 0.3. A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy. Super-Chandrasekhar mass supernovas. In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that had grown to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova", may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Another way to potentially explain the problem of the Champagne Supernova was considering it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large asphericity theory unlikely. Nevertheless, the astronomers point out that this observation poses a challenge to the use of type Ia supernovae as standard candles. Since the observation of the Champagne Supernova in 2003, several more type Ia supernovae have been observed that are very bright, and thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc. These super-Chandrasekhar mass white dwarfs are believed to have had masses up to 2.4–2.8 solar masses. Tolman–Oppenheimer–Volkoff limit. Stars sufficiently massive to pass the Chandrasekhar limit provided by electron degeneracy pressure do not become white dwarf stars. Instead they explode as supernovae. If the final mass is below the Tolman–Oppenheimer–Volkoff limit, then neutron degeneracy pressure contributes to the balance against gravity and the result will be a neutron star; but if the total mass is above the Tolman–Oppenheimer–Volkoff limit, the result will be a black hole.
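Returning to the standard-candle arithmetic quoted above: the claim that a 1-sigma spread of about ±0.3 magnitude corresponds to less than a factor of 2 in luminosity follows from the usual magnitude–luminosity relation. A short worked check, with L₁/L₂ denoting the luminosity ratio across the ±0.3-magnitude interval (ΔM = 0.6 mag):

```latex
\frac{L_1}{L_2} = 10^{0.4\,\Delta M},
\qquad \Delta M = 2 \times 0.3 = 0.6 \;\Longrightarrow\;
\frac{L_1}{L_2} = 10^{0.24} \approx 1.74 < 2.
```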
https://en.wikipedia.org/wiki?curid=6814
Congregational polity
Congregational polity, or congregationalist polity, often known as congregationalism, is a system of ecclesiastical polity in which every local church (congregation) is independent, ecclesiastically sovereign, or "autonomous". Its first articulation in writing is the Cambridge Platform of 1648 in New England. Major Protestant Christian traditions that employ congregationalism include Baptist churches, the Congregational Methodist Church, and Congregational churches known by the "Congregationalist" name, descended from the Independent Reformed wing of the Anglo-American Puritan movement of the 17th century. More recent generations have witnessed a growing number of nondenominational churches, which are often congregationalist in their governance. Although autonomous, like-minded congregations may enter into voluntary associations with other congregations, sometimes called conventions, denominations, or associations. Congregationalism is distinguished from episcopal polity, which is governance by a hierarchy of bishops, and is also distinct from presbyterian polity, in which higher assemblies of congregational representatives can exercise considerable authority over individual congregations. Congregationalism is not limited to the organization of Christian church congregations. The principles of congregationalism have been inherited by the Unitarian Universalist Association and the Canadian Unitarian Council. Basic form. The term "congregational polity" describes a form of church governance that is based on the local congregation. Each local congregation is independent and self-supporting, governed by its own members. Some band into loose voluntary associations with other congregations that share similar beliefs (e.g., the Willow Creek Association and the Unitarian Universalist Association). Others join "conventions", such as the Southern Baptist Convention, the National Baptist Convention or the American Baptist Churches USA (formerly the Northern Baptist Convention). These conventions generally provide stronger ties between congregations, including some doctrinal direction and pooling of financial resources. Congregations that belong to associations and conventions are still independently governed. Most non-denominational churches are organized along congregationalist lines. Many do not see these voluntary associations as "denominations", because they "believe that there is no church other than the local church, and denominations are in variance to Scripture." Denominational families. These Christian traditions use forms of congregational polity. Congregational churches. Congregationalism is a Protestant tradition with roots in the Puritan and Independent movements. In congregational government, the covenanted congregation exists prior to its officers, and as such the members are equipped to call and dismiss their ministers without oversight from any higher ecclesiastical body. Their churches ordinarily have at least one pastor, but may also install ruling elders. Statements of polity in the congregational tradition are called "platforms". These include the Savoy Confession's platform, the Cambridge Platform, and the Saybrook Platform. Denominations in the congregational tradition include the UCC, NACCC, CCCC, and EFCC. Denominations in the tradition support but do not govern their constituent members. Baptist churches. Most Baptists hold that no denominational or ecclesiastical organization has inherent authority over an individual Baptist church. 
Churches can properly relate to each other under this polity only through voluntary cooperation, never by any sort of coercion. Furthermore, this Baptist polity calls for freedom from governmental control. Exceptions to this form of local governance include the Episcopal Baptists, who have an episcopal system. Independent Baptist churches have no formal organizational structure above the level of the local congregation. More generally among Baptists, a variety of parachurch agencies and evangelical educational institutions may be supported generously or not at all, depending entirely upon the local congregation's customs and predilections. Usually doctrinal conformity is held as a first consideration when a church makes a decision to grant or decline financial contributions to such agencies, which are legally external and separate from the congregations they serve. These practices also find currency among non-denominational fundamentalist or charismatic fellowships, many of which derive from Baptist origins, culturally if not theologically. Most Southern Baptist and National Baptist congregations, by contrast, generally relate more closely to external groups such as mission agencies and educational institutions than do those of independent persuasion. However, they adhere to a very similar ecclesiology, refusing to permit outside control or oversight of the affairs of the local church. Churches of Christ. Ecclesiastical government is congregational rather than denominational. Churches of Christ purposefully have no central headquarters, councils, or other organizational structure above the local church level. Rather, the independent congregations are a network, with each congregation participating at its own discretion in various means of service and fellowship with other congregations. Churches of Christ are linked by their shared commitment to restoration principles. Congregations are generally overseen by a plurality of elders (also known in some congregations as shepherds, bishops, or pastors) who are sometimes assisted in the administration of various works by deacons. Elders are generally seen as responsible for the spiritual welfare of the congregation, while deacons are seen as responsible for the non-spiritual needs of the church. Deacons serve under the supervision of the elders, and are often assigned to direct specific ministries. Successful service as a deacon is often seen as preparation for the eldership. Elders and deacons are chosen by the congregation based on the qualifications found in 1 Timothy 3 and Titus 1. Congregations look for elders who have a mature enough understanding of scripture to enable them to supervise the minister and to teach, as well as to perform governance functions. In lieu of willing men who meet these qualifications, congregations are sometimes overseen by an unelected committee of the congregation's men. While the early Restoration Movement had a tradition of itinerant preachers rather than "located Preachers", during the 20th century a long-term, formally trained congregational minister became the norm among Churches of Christ. Ministers are understood to serve under the oversight of the elders. While the presence of a long-term professional minister has sometimes created "significant "de facto" ministerial authority" and led to conflict between the minister and the elders, the eldership has remained the "ultimate locus of authority in the congregation". 
There is a small group within the Churches of Christ which opposes having a single preacher and, instead, rotates preaching duties among qualified elders (this group tends to overlap with groups which oppose Sunday School and use only one cup to serve the Lord's Supper). Churches of Christ hold to the priesthood of all believers. No special titles are used for preachers or ministers that would identify them as clergy. Churches of Christ emphasize that there is no distinction between "clergy" and "laity" and that every member has a gift and a role to play in accomplishing the work of the church. Congregational Methodist Church. Methodists who disagreed with the episcopal polity of the Methodist Episcopal Church, South, left their mother church to form the Congregational Methodist Church, which retains Wesleyan-Arminian theology but adopts congregationalist polity as a distinctive.
https://en.wikipedia.org/wiki?curid=6816
Cavalry
Historically, cavalry (from the French word "cavalerie", itself derived from "cheval" meaning "horse") are groups of soldiers or warriors who fight mounted on horseback. Until the 20th century, cavalry were the most mobile of the combat arms, operating as light cavalry in the roles of reconnaissance, screening, and skirmishing, or as heavy cavalry for decisive economy of force and shock attacks. An individual soldier in the cavalry is known by a number of designations depending on era and tactics, such as a cavalryman, horseman, trooper, cataphract, knight, drabant, hussar, uhlan, mamluk, cuirassier, lancer, dragoon, samurai or horse archer. The designation of "cavalry" was not usually given to any military forces that used other animals or platforms for mounts, such as chariots, camels or elephants. Infantry who moved on horseback, but dismounted to fight on foot, were known in the early 17th to the early 18th century as "dragoons", a class of mounted infantry which in most armies later evolved into standard cavalry while retaining their historic designation. Cavalry had the advantage of improved mobility, and a soldier fighting from horseback also had the advantages of greater height, speed, and inertial mass over an opponent on foot. Another element of horse-mounted warfare is the psychological impact a mounted soldier can inflict on an opponent. The speed, mobility, and shock value of cavalry was greatly valued and exploited in warfare during the Ancient and Medieval eras. Some hosts were mostly cavalry, particularly in nomadic societies of Asia, notably the Huns of Attila and the later Mongol armies. In Europe, cavalry became increasingly armoured (heavy), eventually evolving into the mounted knights of the medieval period. During the 17th century, cavalry in Europe discarded most of its armor, which was ineffective against the muskets and cannons that were coming into common use, and by the mid-18th century armor had mainly fallen into obsolescence, although some regiments retained a small thickened cuirass that offered protection against lances, sabres, and bayonets, including some protection against a shot from distance. In the interwar period many cavalry units were converted into motorized infantry and mechanized infantry units, or reformed as tank troops. The cavalry tank or cruiser tank was one designed with a speed and purpose beyond that of infantry tanks and would subsequently develop into the main battle tank. Nonetheless, some cavalry still served during World War II (notably in the Red Army, the Mongolian People's Army, the Royal Italian Army, the Royal Hungarian Army, the Romanian Army, the Polish Land Forces, and German light reconnaissance units within the Waffen SS). Most cavalry units that are horse-mounted in modern armies serve in purely ceremonial roles, or as mounted infantry in difficult terrain such as mountains or heavily forested areas. Modern usage of the term generally refers to units performing the role of reconnaissance, surveillance, and target acquisition (analogous to historical light cavalry) or main battle tank units (analogous to historical heavy cavalry). Role. Historically, cavalry was divided into light cavalry and heavy cavalry. The differences were their roles in combat, the size of their mounts, and how much armor was worn by the mount and rider. 
Heavy cavalry, such as Byzantine cataphracts and knights of the Early Middle Ages in Europe, were used as shock troops, charging the main body of the enemy at the height of a battle; in many cases their actions decided the outcome of the battle, hence the later term "battle cavalry". Light cavalry, such as horse archers, hussars, and Cossack cavalry, were assigned all the numerous roles that were ill-suited to more narrowly-focused heavy forces. This includes scouting, deterring enemy scouts, foraging, raiding, skirmishing, pursuit of retreating enemy forces, screening of retreating friendly forces, linking separated friendly forces, and countering enemy light forces in all these same roles. Light and heavy cavalry roles continued through early modern warfare, but armor was reduced, with light cavalry mostly unarmored. Yet many cavalry units still retained cuirasses and helmets for their protective value against sword and bayonet strikes, and the morale boost these provided to the wearers, despite the actual armour giving little protection from firearms. By this time the main difference between light and heavy cavalry was in their training and weight; the former was regarded as best suited for harassment and reconnaissance, while the latter was considered best for close-order charges. By the start of the 20th century, as total battlefield firepower increased, cavalry increasingly tended to become dragoons in practice, riding mounted between battles, but dismounting to fight as infantry, while retaining unit names that reflected their older cavalry roles. Military conservatism was, however, strong in most continental cavalry during peacetime, and in these armies dismounted action continued to be regarded as a secondary function until the outbreak of World War I in 1914. With the development of armored warfare, the heavy cavalry role of decisive shock troops had been taken over by armored units employing medium and heavy tanks, and later main battle tanks. Despite horse-borne cavalry becoming obsolete, the term "cavalry" is still used, referring in modern times to units continuing to fulfill the traditional light cavalry roles, employing fast armored cars, light tanks, and infantry fighting vehicles instead of horses, while air cavalry employs helicopters. Early history. Origins. Before the Iron Age, the role of cavalry on the battlefield was largely performed by light chariots. The chariot originated with the Sintashta-Petrovka culture in Central Asia and spread by nomadic or semi-nomadic Indo-Iranians. The chariot was quickly adopted by settled peoples both as a military technology and an object of ceremonial status, especially by the pharaohs of the New Kingdom of Egypt from 1550 BC as well as the Assyrian army and Babylonian royalty. The power of mobility given by mounted units was recognized early on, but was offset by the difficulty of raising large forces and by the inability of horses (then mostly small) to carry heavy armor. Nonetheless, there are indications that, from the 15th century BC onwards, horseback riding was practiced amongst the military elites of the great states of the ancient Near East, most notably those in Egypt, Assyria, the Hittite Empire, and Mycenaean Greece. Cavalry techniques, and the rise of true cavalry, were an innovation of equestrian nomads of the Eurasian Steppe and pastoralist tribes such as the Iranic Parthians and Sarmatians. Together with a core of armoured lancers, these were predominantly horse archers using the Parthian shot tactic. 
Assyrian reliefs of 865–860 BC depict cavalry of this period. At this time, the men had no spurs, saddles, saddle cloths, or stirrups. Fighting from the back of a horse was much more difficult than mere riding. The cavalry acted in pairs; the reins of the mounted archer were controlled by his neighbour's hand. Even at this early time, cavalry used swords, shields, spears, and bows. The sculpture implies two types of cavalry, but this might be a simplification by the artist. Later images of Assyrian cavalry show saddle cloths as primitive saddles, allowing each archer to control his own horse. As early as 490 BC a breed of large horses was bred in the Nisaean plain in Media to carry men with increasing amounts of armour (Herodotus 7,40 & 9,20), but large horses were still very exceptional at this time. By the fourth century BC the Chinese during the Warring States period (403–221 BC) began to use cavalry against rival states, and by 331 BC, when Alexander the Great defeated the Persians, the use of chariots in battle was obsolete in most nations, despite a few ineffective attempts to revive scythed chariots. The last recorded use of chariots as a shock force in continental Europe was during the Battle of Telamon in 225 BC. However, chariots remained in use for ceremonial purposes such as carrying the victorious general in a Roman triumph, or for racing. Outside of mainland Europe, the southern Britons met Julius Caesar with chariots in 55 and 54 BC, but by the time of the Roman conquest of Britain a century later chariots were obsolete, even in Britannia. The last mention of chariot use in Britain was by the Caledonians at the Battle of Mons Graupius, in 84 AD. Ancient Greece: city-states, Thebes, Thessaly and Macedonia. During the classical Greek period, cavalry was usually limited to citizens who could afford expensive war-horses. Three types of cavalry became common: light cavalry, who, armed with javelins, could harass and skirmish; heavy cavalry, using lances and having the ability to close in on their opponents; and finally those whose equipment allowed them to fight either on horseback or on foot. The role of horsemen did, however, remain secondary to that of the hoplites or heavy infantry who comprised the main strength of the citizen levies of the various city-states. Cavalry played a relatively minor role in ancient Greek city-states, with conflicts decided by massed armored infantry. However, Thebes produced Pelopidas, their first great cavalry commander, whose tactics and skills were absorbed by Philip II of Macedon when Philip was a guest-hostage in Thebes. Thessaly was widely known for producing competent cavalrymen, and later experiences in wars both with and against the Persians taught the Greeks the value of cavalry in skirmishing and pursuit. The Athenian author and soldier Xenophon in particular advocated the creation of a small but well-trained cavalry force; to that end, he wrote several manuals on horsemanship and cavalry operations. The Macedonian kingdom in the north, on the other hand, developed a strong cavalry force that culminated in the "hetairoi" (Companion cavalry) of Philip II of Macedon and Alexander the Great. In addition to these heavy cavalry, the Macedonian army also employed lighter horsemen called prodromoi for scouting and screening, as well as the Macedonian pike phalanx and various kinds of light infantry. There were also the "Ippiko" (or "Horserider"), Greek "heavy" cavalry, armed with kontos (or cavalry lance), and sword. 
These wore leather armour or mail plus a helmet. They were medium rather than heavy cavalry, meaning that they were better suited to be scouts, skirmishers, and pursuers rather than front line fighters. The effectiveness of this combination of cavalry and infantry helped to break enemy lines and was most dramatically demonstrated in Alexander's conquests of Persia, Bactria, and northwestern India. Roman Republic and early Empire. The cavalry in the early Roman Republic remained the preserve of the wealthy landed class known as the "equites"—men who could afford the expense of maintaining a horse in addition to arms and armor heavier than those of the common legions. Horses were provided by the Republic and could be withdrawn if neglected or misused, together with the status of being a cavalryman. As the class grew to be more of a social elite instead of a functional property-based military grouping, the Romans began to employ Italian socii for filling the ranks of their cavalry. The weakness of Roman cavalry was demonstrated by Hannibal Barca during the Second Punic War where he used his superior mounted forces to win several battles. The most notable of these was the Battle of Cannae, where he inflicted a catastrophic defeat on the Romans. At about the same time the Romans began to recruit foreign auxiliary cavalry from among Gauls, Iberians, and Numidians, the last being highly valued as mounted skirmishers and scouts (see Numidian cavalry). Julius Caesar had a high opinion of his escort of Germanic mixed cavalry, giving rise to the "Cohortes Equitatae". Early emperors maintained an ala of Batavian cavalry as their personal bodyguards until the unit was dismissed by Galba after the Batavian Rebellion. For the most part, Roman cavalry during the early Republic functioned as an adjunct to the legionary infantry and formed only one-fifth of the standing force comprising a consular army. Except in times of major mobilisation about 1,800 horsemen were maintained, with three hundred attached to each legion. The relatively low ratio of horsemen to infantry does not mean that the utility of cavalry should be underestimated, as its strategic role in scouting, skirmishing, and outpost duties was crucial to the Romans' capability to conduct operations over long distances in hostile or unfamiliar territory. On some occasions Roman cavalry also proved its ability to strike a decisive tactical blow against a weakened or unprepared enemy, such as the final charge at the Battle of Aquilonia. After defeats such as the Battle of Carrhae, the Romans learned the importance of large cavalry formations from the Parthians. At the same time heavy spears and shields modelled on those favoured by the horsemen of the Greek city-states were adopted to replace the lighter weaponry of early Rome. These improvements in tactics and equipment reflected those of a thousand years earlier when the first Iranians to reach the Iranian Plateau forced the Assyrians to undertake similar reform. Nonetheless, the Romans would continue to rely mainly on their heavy infantry supported by auxiliary cavalry. Late Roman Empire and the Migration Period. In the army of the late Roman Empire, cavalry played an increasingly important role. The Spatha, the classical sword throughout most of the 1st millennium was adopted as the standard model for the Empire's cavalry forces. By the 6th century these had evolved into lengthy straight weapons influenced by Persian and other eastern patterns. 
Other specialist weapons during this period included javelins, long-reaching lances, axes, and maces. The most widespread employment of heavy cavalry at this time was found in the forces of the Iranian empires, the Parthians and their Persian Sasanian successors. Both, but especially the former, were famed for the cataphract (fully armored cavalry armed with lances), even though the majority of their forces consisted of lighter horse archers. The West first encountered this eastern heavy cavalry during the Hellenistic period with further intensive contacts during the eight centuries of the Roman–Persian Wars. At first the Parthians' mobility greatly confounded the Romans, whose armoured close-order infantry proved unable to match the speed of the Parthians. However, later the Romans would successfully adapt such heavy armor and cavalry tactics by creating their own units of cataphracts and "clibanarii". The decline of the Roman infrastructure made it more difficult to field large infantry forces, and during the 4th and 5th centuries cavalry began to take a more dominant role on the European battlefield, also in part made possible by the appearance of new, larger breeds of horses. The replacement of the Roman saddle by variants on the Scythian model, with pommel and cantle, was also a significant factor, as was the adoption of stirrups and the concomitant increase in stability of the rider's seat. Armored cataphracts began to be deployed in Eastern Europe and the Near East, following the precedents established by Persian forces, as the main striking force of the armies in contrast to the earlier roles of cavalry as scouts, raiders, and outflankers. The late-Roman cavalry tradition of organized units in a standing army differed fundamentally from the nobility of the Germanic invaders—individual warriors who could afford to provide their own horses and equipment. While there was no direct linkage with these predecessors, the early medieval knight also developed as a member of a social and martial elite, able to meet the considerable expenses required by his role from grants of land and other incomes. Asia. Central Asia. Xiongnu, Tujue, Avars, Kipchaks, Khitans, Mongols, Don Cossacks and the various Turkic peoples are also examples of the horse-mounted groups that managed to gain substantial successes in military conflicts with settled agrarian and urban societies, due to their strategic and tactical mobility. As European states began to assume the character of bureaucratic nation-states supporting professional standing armies, recruitment of these mounted warriors was undertaken in order to fill the strategic roles of scouts and raiders. The best known instance of the continued employment of mounted tribal auxiliaries were the Cossack cavalry regiments of the Russian Empire. In Eastern Europe, and out onto the steppes, cavalry remained important much longer and dominated the scene of warfare until the early 17th century and even beyond, as the strategic mobility of cavalry was crucial for the semi-nomadic pastoralist lives that many steppe cultures led. Tibetans also had a tradition of cavalry warfare, in several military engagements with the Chinese Tang dynasty (618–907 AD). East Asia. China. Further east, the military history of China, specifically northern China, held a long tradition of intense military exchange between Han Chinese infantry forces of the settled dynastic empires and the mounted nomads or "barbarians" of the north. 
The naval history of China was centered more to the south, where mountains, rivers, and large lakes necessitated the employment of a large and well-kept navy. In 307 BC, King Wuling of Zhao, whose state was a successor of the partitioned state of Jin, ordered his commanders and troops to adopt the trousers of the nomads as well as practice the nomads' form of mounted archery to hone their new cavalry skills. The adoption of massed cavalry in China also broke the tradition of the chariot-riding Chinese aristocracy in battle, which had been in use since the ancient Shang dynasty (c. 1600–1050 BC). By this time, large Chinese infantry-based armies of 100,000 to 200,000 troops were buttressed by several hundred thousand mounted cavalry in support or as an effective striking force. The handheld pistol-and-trigger crossbow was invented in China in the fourth century BC; the Song dynasty scholars Zeng Gongliang, Ding Du, and Yang Weide wrote in their book "Wujing Zongyao" (1044 AD) that massed missile fire by crossbowmen was the most effective defense against enemy cavalry charges. On many occasions the Chinese studied nomadic cavalry tactics and applied the lessons in creating their own potent cavalry forces, while in others they simply recruited the tribal horsemen wholesale into their armies; and in yet other cases nomadic empires proved eager to enlist Chinese infantry and engineering, as in the case of the Mongol Empire and its sinicized part, the Yuan dynasty (1279–1368). The Chinese recognized early on, during the Han dynasty (202 BC – 220 AD), that they were at a disadvantage in lacking the number of horses the northern nomadic peoples mustered in their armies. Emperor Wu of Han (r. 141–87 BC) went to war with the Dayuan for this reason, since the Dayuan were hoarding a massive number of tall, strong, Central Asian-bred horses in the Hellenized Greek region of Fergana (established slightly earlier by Alexander the Great). Despite some early defeats in the campaign, Emperor Wu's war from 104 BC to 102 BC succeeded in gathering the prized tribute of horses from Fergana. Cavalry tactics in China were enhanced by the invention of the saddle-attached stirrup by at least the 4th century, as the oldest reliable depiction of a rider with paired stirrups was found in a Jin dynasty tomb of the year 322 AD. The Chinese invention of the horse collar by the 5th century was also a great improvement on the breast harness, allowing the horse to haul greater weight without a heavy burden on its skeletal structure. Korea. Horse warfare in Korea began during the ancient Korean kingdom of Gojoseon. From at least the 3rd century BC, northern nomadic peoples and the Yemaek exerted an influence on Korean warfare. By roughly the first century BC, the ancient kingdom of Buyeo also had mounted warriors. The cavalry of Goguryeo, one of the Three Kingdoms of Korea, were called "Gaemamusa" (개마무사, 鎧馬武士), and were renowned as a fearsome heavy cavalry force. King Gwanggaeto the Great often led expeditions with his cavalry against Baekje, the Gaya confederacy, Buyeo, Later Yan and Japanese invaders. In the 12th century, Jurchen tribes began to violate the Goryeo–Jurchen borders, and eventually invaded Goryeo Korea. After experiencing invasion by the Jurchen, the Korean general Yun Kwan realized that Goryeo lacked efficient cavalry units. He reorganized the Goryeo military into a professional army that would contain well-trained cavalry units.
In 1107, the Jurchen were ultimately defeated, and surrendered to Yun Kwan. To mark the victory, General Yun built nine fortresses to the northeast of the Goryeo–Jurchen borders (동북 9성, 東北 九城). Japan. The ancient Japanese of the Kofun period also adopted cavalry and equine culture by the 5th century AD. The emergence of the samurai aristocracy led to the development of armoured horse archers, who themselves later developed into charging lancer cavalry as gunpowder weapons rendered bows obsolete. Japanese cavalry was largely made up of landowners who rode on horseback to better survey the troops they were called upon to bring to an engagement, rather than fighting as the massed cavalry units seen in other cultures. An example is Yabusame (流鏑馬), a type of mounted archery in traditional Japanese archery. An archer on a running horse shoots three special "turnip-headed" arrows successively at three wooden targets. This style of archery has its origins at the beginning of the Kamakura period. Minamoto no Yoritomo became alarmed at the lack of archery skills his samurai had. He organized yabusame as a form of practice. Currently, the best places to see yabusame performed are at the Tsurugaoka Hachiman-gū in Kamakura and Shimogamo Shrine in Kyoto (during Aoi Matsuri in early May). It is also performed in Samukawa and on the beach at Zushi, as well as other locations. Kasagake or Kasakake (笠懸, かさがけ, lit. "hat shooting") is a type of Japanese mounted archery. In contrast to yabusame, the types of targets are various and the archer shoots without stopping the horse. While yabusame has been played as a part of formal ceremonies, kasagake has developed as a game or practice of martial arts, focusing on technical elements of horse archery. South Asia. Indian subcontinent. In the Indian subcontinent, cavalry played a major role from the Gupta dynasty (320–600) period onwards. India also has the oldest evidence for the introduction of toe-stirrups. Indian literature contains numerous references to the mounted warriors of the Central Asian horse nomads, notably the Sakas, Kambojas, Yavanas, Pahlavas and Paradas. Numerous Puranic texts refer to a conflict in ancient India (16th century BC) in which the horsemen of five nations, called the "Five Hordes" ("pañca.ganan") or Kṣatriya hordes ("Kṣatriya ganah"), attacked and captured the state of Ayudhya by dethroning its Vedic King Bahu. The Mahabharata, Ramayana, numerous Puranas and some foreign sources attest that the Kamboja cavalry frequently played a role in ancient wars. V. R. Ramachandra Dikshitar writes: "Both the Puranas and the epics agree that the horses of the Sindhu and Kamboja regions were of the finest breed, and that the services of the Kambojas as cavalry troopers were utilised in ancient wars". J.A.O.S. writes: "Most famous horses are said to come either from Sindhu or Kamboja; of the latter (i.e. the Kamboja), the Indian epic Mahabharata speaks among the finest horsemen". The Mahabharata speaks of the esteemed cavalry of the Kambojas, Sakas, Yavanas and Tusharas, all of whom had participated in the Kurukshetra war under the supreme command of the Kamboja ruler Sudakshin Kamboj. The Mahabharata and Vishnudharmottara Purana pay special attention to the Kambojas, Yavanas, Gandharas etc. being "ashva.yuddha.kushalah" (expert cavalrymen). In the Mahabharata war, the Kamboja cavalry, along with that of the Sakas and Yavanas, is reported to have been enlisted by the Kuru king Duryodhana of Hastinapura.
Herodotus (c. 484 – c. 425 BC) attests that the Gandarian mercenaries (i.e. "Gandharans/Kambojans" of the Gandari satrapy of the Achaemenids) from the 20th satrapy of the Achaemenids were recruited into the army of emperor Xerxes I (486–465 BC), which he led against Hellas. Similarly, the "men of the Mountain Land" from north of the Kabul River, equivalent to medieval Kohistan (Pakistan), figure in the army of Darius III against Alexander at Arbela, providing a cavalry force and 15 elephants. This obviously refers to Kamboja cavalry south of the Hindu Kush. The Kambojas were famous for their horses, as well as their cavalrymen ("asva-yuddha-Kushalah"). On account of their supreme position in horse (Ashva) culture, they were also popularly known as Ashvakas, i.e. the "horsemen", and their land was known as the "Home of Horses". They are the Assakenoi and Aspasioi of the Classical writings, and the Ashvakayanas and Ashvayanas in Pāṇini's Ashtadhyayi. The Assakenoi had faced Alexander with 30,000 infantry, 20,000 cavalry and 30 war elephants. Scholars have identified the Assakenoi and Aspasioi clans of the Kunar and Swat valleys as a section of the Kambojas. These hardy tribes had offered stubborn resistance to Alexander during the latter's campaign in the Kabul, Kunar and Swat valleys and had even drawn the praise of Alexander's historians. These highlanders, designated as "parvatiya Ayudhajivinah" in Pāṇini's Ashtadhyayi, were rebellious, fiercely independent and freedom-loving cavalrymen who never easily yielded to any overlord. The Sanskrit drama "Mudra-Rakshasa" by Visakha Dutta and the Jaina work "Parishishtaparvan" refer to Chandragupta's alliance with the Himalayan king "Parvataka". The Himalayan alliance gave Chandragupta a formidable composite army made up of the cavalry forces of the Shakas, Yavanas, Kambojas, Kiratas, Parasikas and Bahlikas, as attested by the Mudra-Rakshasa (Mudra-Rakshasa 2). These hordes had helped Chandragupta Maurya defeat the ruler of Magadha and placed Chandragupta on the throne, thus laying the foundations of the Mauryan dynasty in northern India. The cavalry of the Hunas and the Kambojas is also attested in the Raghu Vamsa, an epic poem by the Sanskrit poet Kalidasa. Kalidasa's Raghu is believed to be Chandragupta II ("Vikramaditya") (375–413/15 AD) of the well-known Gupta dynasty. As late as the mediaeval era, the Kamboja cavalry had also formed part of the Gurjara-Pratihara armed forces from the 8th to the 10th centuries AD. They had come to Bengal with the Pratiharas when the latter conquered part of the province. The ancient Kambojas organised military "sanghas" and "shrenis" (corporations) to manage their political and military affairs, as the Arthashastra of Kautilya and the Mahabharata record. They are described as "Ayuddha-jivi" or "Shastr-opajivis" (nations-in-arms), which also means that the Kamboja cavalry offered its military services to other nations. There are numerous references to Kambojas having been requisitioned as cavalry troopers in ancient wars by outside nations. Mughal Empire. The Mughal armies ("lashkar") were primarily a cavalry force. The elite corps were the "ahadi", who provided direct service to the Emperor and acted as guard cavalry. Supplementary cavalry or "dakhilis" were recruited, equipped and paid by the central state. This was in contrast to the "tabinan" horsemen, who were the followers of individual noblemen. Their training and equipment varied widely, but they made up the backbone of the Mughal cavalry.
Finally, there were tribal irregulars, led by and loyal to tributary chiefs. These included Hindus, Afghans and Turks summoned for military service when their autonomous leaders were called on by the Imperial government. European Middle Ages. As the quality and availability of heavy infantry declined in Europe with the fall of the Roman Empire, heavy cavalry became more effective. Infantry lacking the cohesion and discipline of tight formations is more susceptible to being broken and scattered by shock combat, the main role of heavy cavalry, which rose to become the dominant force on the European battlefield. As heavy cavalry increased in importance, it became the main focus of military development. The arms and armour of heavy cavalry grew heavier, the high-backed saddle was developed, and stirrups and spurs were added, increasing the advantage of heavy cavalry even more. This shift in military importance was reflected in an increasingly hierarchical society as well. From the late 10th century onwards, heavily armed horsemen, "milites" or knights, emerged as an expensive elite taking centre stage both on and off the battlefield. This class of aristocratic warriors was considered the "ultimate" in heavy cavalry: well-equipped with the best weapons and state-of-the-art armour from head to foot, leading with the lance in battle in a full-gallop, close-formation "knightly charge" that might prove irresistible, winning the battle almost as soon as it began. But knights remained the minority of total available combat forces; the expense of arms, armour, and horses was only affordable to a select few. While mounted men-at-arms focused on the narrow role of shock combat, medieval armies relied on a large variety of foot troops to fulfill all the rest (skirmishing, flank guards, scouting, holding ground, etc.). Medieval chroniclers tended to pay undue attention to the knights at the expense of the common soldiers, which led early students of military history to suppose that heavy cavalry was the only force that mattered on medieval European battlefields. But well-trained and disciplined infantry could defeat knights. Massed English longbowmen triumphed over French cavalry at Crécy, Poitiers and Agincourt, while at Gisors (1188), Bannockburn (1314), and Laupen (1339), foot-soldiers proved they could resist cavalry charges as long as they held their formation. Once the Swiss developed their pike squares for offensive as well as defensive use, infantry started to become the principal arm. This aggressive new doctrine gave the Swiss victory over a range of adversaries, and their enemies found that the only reliable way to defeat them was by the use of an even more comprehensive combined-arms doctrine, as evidenced at the Battle of Marignano. The introduction of missile weapons that required less skill than the longbow, such as the crossbow and hand cannon, also helped shift the focus from cavalry elites to masses of cheap infantry equipped with easy-to-learn weapons. These missile weapons were very successfully used in the Hussite Wars, in combination with Wagenburg tactics. This gradual rise in the dominance of infantry led to the adoption of dismounted tactics.
From the earliest times, knights and mounted men-at-arms had frequently dismounted to handle enemies they could not overcome on horseback, as at the Battle of the Dyle (891) and the Battle of Bremule (1119), but after the 1350s this trend became more marked, with dismounted men-at-arms fighting as super-heavy infantry with two-handed swords and poleaxes. In any case, warfare in the Middle Ages tended to be dominated by raids and sieges rather than pitched battles, and mounted men-at-arms rarely had any choice other than dismounting when faced with the prospect of assaulting a fortified position. Islamic States. Arabs. The Islamic prophet Muhammad made use of cavalry in many of his military campaigns, including the Expedition of Dhu Qarad and the expedition of Zaid ibn Haritha in al-Is, which took place in September 627 AD, the fifth month of 6 AH in the Islamic calendar. Early organized Arab mounted forces under the Rashidun caliphate comprised a light cavalry armed with lance and sword. Its main role was to attack the enemy flanks and rear. These relatively lightly armored horsemen formed the most effective element of the Muslim armies during the later stages of the Islamic conquest of the Levant. The best use of this lightly armed, fast-moving cavalry was revealed at the Battle of Yarmouk (636 AD), in which Khalid ibn Walid, knowing the skills of his horsemen, used them to turn the tables at every critical stage of the battle with their ability to engage, disengage, then turn back and attack again from the flank or rear. A strong cavalry regiment was formed by Khalid ibn Walid, which included the veterans of the campaigns in Iraq and Syria. Early Muslim historians gave it the name "Tali'a mutaharrikah" (طليعة متحركة), or the Mobile Guard. It was used as an advance guard and a strong striking force to rout the opposing armies, its greater mobility giving it the upper hand when manoeuvring against any Byzantine army. With this mobile striking force, the conquest of Syria was made easy. The Battle of Talas in 751 AD was a conflict between the Arab Abbasid Caliphate and the Chinese Tang dynasty over the control of Central Asia. Chinese infantry were routed by Arab cavalry near the bank of the River Talas. Until the 11th century, the classic cavalry strategy of the Arab Middle East incorporated the "razzia" tactics of fast-moving raids by mixed bodies of horsemen and infantry. Under the talented leadership of Saladin and other Islamic commanders, the emphasis changed to Mamluk horse-archers backed by bodies of irregular light cavalry. Trained to rapidly disperse, harass and regroup, these flexible mounted forces proved capable of withstanding the previously invincible heavy knights of the western crusaders at battles such as Hattin in 1187. Mamluks. Originating in the 9th century as Central Asian "ghulams" or captives utilised as mounted auxiliaries by Arab armies, Mamluks were subsequently trained as cavalry soldiers rather than solely mounted archers, with increased priority being given to the use of lances and swords. Mamluks were to follow the dictates of al-furusiyya, a code of conduct that included values like courage and generosity but also the doctrines of cavalry tactics, horsemanship, archery and the treatment of wounds. By the late 13th century, the Mamluk armies had evolved into a professional elite of cavalry, backed by more numerous but less well-trained footmen. Maghreb.
The Islamic Berber states of North Africa employed elite horse-mounted cavalry armed with spears, following the model of the original Arab occupiers of the region. Horse harness and weapons were manufactured locally, and the six-monthly stipends for horsemen were double those of their infantry counterparts. During the 8th-century Islamic conquest of Iberia, large numbers of horses and riders were shipped from North Africa to specialise in raiding and in providing support for the massed Berber footmen of the main armies. Maghrebi traditions of mounted warfare eventually influenced a number of sub-Saharan African polities in the medieval era. The Esos of Ikoyi, military aristocrats of the Yoruba peoples, were a notable manifestation of this phenomenon. Iran. The Qizilbash were a class of Safavid militant warriors in Iran during the 15th to 18th centuries who often fought as elite cavalry. Ottoman. During its period of greatest expansion, from the 14th to 17th centuries, cavalry formed the powerful core of the Ottoman armies. Registers dated 1475 record 22,000 "Sipahi" feudal cavalry levied in Europe, 17,000 "Sipahis" recruited from Anatolia, and 3,000 "Kapikulu" (regular bodyguard cavalry). During the 18th century, however, the Ottoman mounted troops evolved into light cavalry serving in the thinly populated regions of the Middle East and North Africa. Such frontier horsemen were largely raised by local governors and were separate from the main field armies of the Ottoman Empire. At the beginning of the 19th century, modernised "Nizam-I Cedid" ("New Army") regiments appeared, including full-time cavalry units officered from the horse guards of the Sultan. Renaissance Europe. Ironically, the rise of infantry in the early 16th century coincided with the "golden age" of heavy cavalry; a French or Spanish army at the beginning of the century could have up to half its numbers made up of various kinds of light and heavy cavalry, whereas in earlier medieval and later 17th-century armies the proportion of cavalry was seldom more than a quarter. Knighthood largely lost its military functions and became more closely tied to social and economic prestige in an increasingly capitalistic Western society. With the rise of drilled and trained infantry, the mounted men-at-arms, now sometimes called "gendarmes" and often part of the standing army themselves, adopted the same role as in the Hellenistic age, that of delivering a decisive blow once the battle was already engaged, either by charging the enemy in the flank or attacking their commander-in-chief. From the 1550s onwards, the use of gunpowder weapons solidified infantry's dominance of the battlefield and began to allow true mass armies to develop. This is closely related to the increase in the size of armies throughout the early modern period; heavily armored cavalrymen were expensive to raise and maintain and it took years to train a skilled horseman or a horse, while arquebusiers and later musketeers could be trained and kept in the field at much lower cost, and were much easier to recruit. The Spanish tercio and later formations relegated cavalry to a supporting role. The pistol was specifically developed to try to bring cavalry back into the conflict, together with manoeuvres such as the caracole.
The caracole was not particularly successful, however, and the charge (whether with lance, sword, or pistol) remained the primary mode of employment for many types of European cavalry, although by this time it was delivered in much deeper formations and with greater discipline than before. The demi-lancers and the heavily armored sword-and-pistol reiters were among the types of cavalry whose heyday was in the 16th and 17th centuries. During this period the Polish Winged hussars were a dominant heavy cavalry force in Eastern Europe that initially achieved great success against Swedes, Russians, Turks and others, until repeatedly beaten by combined-arms tactics, increased firepower, or in melee by the Drabant cavalry of the Swedish Empire. From their last engagement in 1702 (at the Battle of Kliszów) until 1776, the obsolete Winged hussars were demoted and largely assigned to ceremonial roles. The Polish Winged hussars' military prowess had peaked at the Siege of Vienna in 1683, when hussar banners participated in the largest cavalry charge in history and successfully repelled the Ottoman attack. 18th-century Europe and Napoleonic Wars. Cavalry retained an important role in this age of regularization and standardization across European armies. They remained the primary choice for confronting enemy cavalry. Attacking an unbroken infantry force head-on usually resulted in failure, but extended linear infantry formations were vulnerable to flank or rear attacks. Cavalry was important at Blenheim (1704), Rossbach (1757), Marengo (1800), Eylau and Friedland (1807), remaining significant throughout the Napoleonic Wars. Even with the increasing prominence of infantry, cavalry still had an irreplaceable role in armies, due to their greater mobility. Their non-battle duties often included patrolling the fringes of army encampments, with standing orders to intercept suspected shirkers and deserters, as well as serving as outpost pickets in advance of the main body. During battle, lighter cavalry such as hussars and uhlans might skirmish with other cavalry, attack light infantry, or charge and either capture enemy artillery or render it useless by plugging the touchholes with iron spikes. Heavier cavalry such as cuirassiers, dragoons, and carabiniers usually charged towards infantry formations or opposing cavalry in order to rout them. Both light and heavy cavalry pursued retreating enemies, the phase of battle in which most casualties occurred. The greatest cavalry charge of modern history was at the 1807 Battle of Eylau, when the entire 11,000-strong French cavalry reserve, led by Joachim Murat, launched a huge charge on and through the Russian infantry lines. Cavalry's dominating and menacing presence on the battlefield was countered by the use of infantry squares. The most notable examples are at the Battle of Quatre Bras and later at the Battle of Waterloo, in the latter of which repeated charges by up to 9,000 French cavalrymen ordered by Michel Ney failed to break the British-Allied army, which had formed into squares. Massed infantry, especially those formed in squares, were deadly to cavalry, but offered an excellent target for artillery. Once a bombardment had disordered the infantry formation, cavalry were able to rout and pursue the scattered foot soldiers. It was not until individual firearms gained accuracy and improved rates of fire that cavalry was diminished in this role as well.
Even then, light cavalry remained an indispensable tool for scouting, screening the army's movements, and harassing the enemy's supply lines, until military aircraft supplanted it in these roles in the early stages of World War I. 19th century. Europe. By the beginning of the 19th century, European cavalry fell into four main categories: cuirassiers, dragoons, hussars, and lancers (or uhlans). There were cavalry variations for individual nations as well: France had the "chasseurs à cheval"; Prussia had the "Jäger zu Pferde"; Bavaria, Saxony and Austria had the "Chevaulegers"; and Russia had Cossacks. Britain, from the mid-18th century, had Light Dragoons as light cavalry and Dragoons, Dragoon Guards and Household Cavalry as heavy cavalry. Only after the end of the Napoleonic wars were the Household Cavalry equipped with cuirasses, and some other regiments were converted to lancers. In the United States Army prior to 1862 the cavalry were almost always dragoons. The Imperial Japanese Army had its cavalry uniformed as hussars, but they fought as dragoons. In the Crimean War, the Charge of the Light Brigade and the Thin Red Line at the Battle of Balaclava showed the vulnerability of cavalry when deployed without effective support. Franco-Prussian War. During the Franco-Prussian War, at the Battle of Mars-la-Tour in 1870, a Prussian cavalry brigade decisively smashed the centre of the French battle line after skilfully concealing its approach. This event became known as Von Bredow's Death Ride after the brigade commander Adalbert von Bredow; it would be used in the following decades to argue that massed cavalry charges still had a place on the modern battlefield. Imperial expansion. Cavalry found a new role in colonial campaigns (irregular warfare), where modern weapons were lacking and the slow-moving infantry-artillery train or fixed fortifications were often ineffective against indigenous insurgents (unless the latter offered a fight on an equal footing, as at Tel-el-Kebir, Omdurman, etc.). Cavalry "flying columns" proved effective, or at least cost-effective, in many campaigns, although an astute native commander (like Samori in western Africa, Shamil in the Caucasus, or any of the better Boer commanders) could turn the tables and use the greater mobility of their cavalry to offset their relative lack of firepower compared with European forces. In 1903 the British Indian Army maintained forty regiments of cavalry, numbering about 25,000 Indian sowars (cavalrymen), with British and Indian officers. Several of the more famous regiments in the lineages of the modern Indian and Pakistani armies are still active, though they are now armoured formations, for example the Guides Cavalry of Pakistan. The French Army maintained substantial cavalry forces in Algeria and Morocco from 1830 until the end of World War II. Much of the Mediterranean coastal terrain was suitable for mounted action and there was a long-established culture of horsemanship amongst the Arab and Berber inhabitants. The French forces included Spahis, Chasseurs d'Afrique, Foreign Legion cavalry and mounted Goumiers. Both Spain and Italy raised cavalry regiments from amongst the indigenous horsemen of their North African territories (see regulares, Italian Spahis and savari respectively). Imperial Germany employed mounted formations in South West Africa as part of the Schutztruppen (colonial army) garrisoning that territory. United States.
In the early American Civil War, the regular United States Army's mounted rifle, dragoon, and two existing cavalry regiments were reorganized and renamed as cavalry regiments, of which there were six. Over a hundred other federal and state cavalry regiments were organized, but the infantry played a much larger role in many battles due to its larger numbers, lower cost per rifle fielded, and much easier recruitment. However, cavalry saw a role as part of screening forces and in foraging and scouting. The later phases of the war saw the Federal army developing a truly effective cavalry force fighting as scouts, raiders, and, with repeating rifles, as mounted infantry. The distinguished 1st Virginia Cavalry ranks as one of the most effective and successful cavalry units on the Confederate side. Noted cavalry commanders included Confederate general J.E.B. Stuart, Nathan Bedford Forrest, and John Singleton Mosby (a.k.a. "The Grey Ghost"), and, on the Union side, Philip Sheridan and George Armstrong Custer. After the Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S. 7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley: the frontier army was a conventional military force trying to control, by conventional military methods, a people that did not behave like conventional enemies and, indeed, quite often were not enemies at all. This is the most difficult of all military assignments, whether in Africa, Asia, or the American West. These regiments, which rarely took the field as complete organizations, served throughout the American Indian Wars through the close of the frontier in the 1890s. Volunteer cavalry regiments like the Rough Riders consisted of horsemen such as cowboys, ranchers and other outdoorsmen, who served as cavalry in the United States military. Developments 1900–1914. At the beginning of the 20th century, all armies still maintained substantial cavalry forces, although there was contention over whether their role should revert to that of mounted infantry (the historic dragoon function). With motorised vehicles and aircraft still under development, horse-mounted troops remained the only fully mobile forces available for manoeuvre warfare until 1914. United Kingdom. Following the experience of the South African War of 1899–1902 (where mounted Boer citizen commandos fighting on foot from cover proved more effective than regular cavalry employed on horseback), the British Army withdrew lances for all but ceremonial purposes and placed a new emphasis on training for dismounted action in 1903. Lances were, however, readopted for active service in 1912. Russia. In 1882, the Imperial Russian Army converted all its line hussar and lancer regiments to dragoons, with an emphasis on mounted infantry training. In 1910 these regiments reverted to their historic roles, designations and uniforms. Germany. By 1909, official regulations dictating the role of the Imperial German cavalry had been revised to indicate an increasing realization of the realities of modern warfare. The massive cavalry charge in three waves which had previously marked the end of annual maneuvers was discontinued, and a new emphasis was placed in training on scouting, raiding and pursuit rather than main battle involvement.
The perceived importance of cavalry was, however, still evident, with thirteen new regiments of mounted rifles ("Jäger zu Pferde") being raised shortly before the outbreak of war in 1914. France. In spite of significant experience in mounted warfare in Morocco during 1908–14, the French cavalry remained a highly conservative institution. The traditional tactical distinctions between heavy, medium, and light cavalry branches were retained. During the early months of World War I, French cuirassiers wore breastplates and plumed helmets unchanged from the Napoleonic period. Dragoons were similarly equipped, though they did not wear cuirasses and did carry lances. Light cavalry were described as being "a blaze of colour". French cavalry of all branches were well mounted and were trained to change position and charge at full gallop. One weakness in training was that French cavalrymen seldom dismounted on the march and their horses suffered heavily from raw backs in August 1914. First World War. Europe 1914. In August 1914, all combatant armies still retained substantial numbers of cavalry and the mobile nature of the opening battles on both Eastern and Western Fronts provided a number of instances of traditional cavalry actions, though on a smaller and more scattered scale than those of previous wars. The 110 regiments of Imperial German cavalry, while as colourful and traditional as any in peacetime appearance, had adopted a practice of falling back on infantry support when any substantial opposition was encountered. These cautious tactics aroused derision amongst their more conservative French and Russian opponents but proved appropriate to the new nature of warfare. A single attempt by the German army, on 12 August 1914, to use six regiments of massed cavalry to cut off the Belgian field army from Antwerp foundered when they were driven back in disorder by rifle fire. The two German cavalry brigades involved lost 492 men and 843 horses in repeated charges against dismounted Belgian lancers and infantry. One of the last recorded charges by French cavalry took place on the night of 9/10 September 1914, when a squadron of the 16th Dragoons overran a German airfield at Soissons while suffering heavy losses. Once the front lines stabilised on the Western Front with the start of trench warfare, a combination of barbed wire, uneven muddy terrain, machine guns and rapid-fire rifles proved deadly to horse-mounted troops, and by early 1915 most cavalry units were no longer seeing front-line action. On the Eastern Front, a more fluid form of warfare arose from flat open terrain favorable to mounted operations. On the outbreak of war in 1914 the bulk of the Russian cavalry was deployed at full strength in frontier garrisons and, during the period that the main armies were mobilizing, scouting and raiding into East Prussia and Austrian Galicia was undertaken by mounted troops trained to fight with sabre and lance in the traditional style. On 21 August 1914 the Austro-Hungarian 4th Cavalry Division clashed with the Russian 10th Cavalry Division under General Fyodor Arturovich Keller in the Battle of Jaroslawice, in what was arguably the final historic battle to involve thousands of horsemen on both sides. While this was the last massed cavalry encounter on the Eastern Front, the absence of good roads limited the use of mechanized transport and even the technologically advanced Imperial German Army continued to deploy up to twenty-four horse-mounted divisions in the East as late as 1917. Europe 1915–1918.
For the remainder of the War on the Western Front, cavalry had virtually no role to play. The British and French armies dismounted many of their cavalry regiments and used them in infantry and other roles: the Life Guards, for example, spent the last months of the War as a machine-gun corps, and the Australian Light Horse served as light infantry during the Gallipoli campaign. In September 1914 cavalry comprised 9.28% of the total manpower of the British Expeditionary Force in France; by July 1918 this proportion had fallen to 1.65%. As early as the first winter of the war, most French cavalry regiments had dismounted a squadron each for service in the trenches. The French cavalry numbered 102,000 in May 1915 but had been reduced to 63,000 by October 1918. The German Army dismounted nearly all their cavalry in the West, maintaining only one mounted division on that front by January 1917. Italy entered the war in 1915 with thirty regiments of line cavalry, lancers and light horse. While employed effectively against their Austro-Hungarian counterparts during the initial offensives across the Isonzo River, the Italian mounted forces ceased to have a significant role as the front shifted into mountainous terrain. By 1916 most cavalry machine-gun sections and two complete cavalry divisions had been dismounted and seconded to the infantry. Some cavalry were retained as mounted troops in reserve behind the lines, in anticipation of a penetration of the opposing trenches that it seemed would never come. Tanks, introduced on the Western Front by the British in September 1916 during the Battle of the Somme, had the capacity to achieve such breakthroughs but did not have the reliable range to exploit them. In their first major use, at the Battle of Cambrai (1917), the plan was for a cavalry division to follow behind the tanks; however, the cavalry were unable to cross a canal because a tank had broken the only bridge. On a few other occasions throughout the war, cavalry were readied in significant numbers for involvement in major offensives, such as the Battle of Caporetto and the Battle of Moreuil Wood. However, it was not until the German Army had been forced to retreat in the Hundred Days Offensive of 1918 that limited numbers of cavalry were again able to operate with any effectiveness in their intended role. There was a successful charge by the British 7th Dragoon Guards on the last day of the war. In the wider spaces of the Eastern Front, a more fluid form of warfare continued and there was still a use for mounted troops. Some wide-ranging actions were fought, again mostly in the early months of the war. However, even here the value of cavalry was overrated and the maintenance of large mounted formations at the front by the Russian Army put a major strain on the railway system, to little strategic advantage. In February 1917, the Russian regular cavalry (exclusive of Cossacks) was reduced by nearly a third from its peak number of 200,000, as two squadrons of each regiment were dismounted and incorporated into additional infantry battalions. Their Austro-Hungarian opponents, plagued by a shortage of trained infantry, had been obliged to progressively convert most horse cavalry regiments to dismounted rifle units starting in late 1914. Middle East. In the Middle East, during the Sinai and Palestine Campaign, mounted forces (British, Indian, Ottoman, Australian, Arab and New Zealand) retained an important strategic role both as mounted infantry and as cavalry.
In Egypt, mounted formations like the New Zealand Mounted Rifles Brigade and the Australian Light Horse of the ANZAC Mounted Division, operating as mounted infantry, drove German and Ottoman forces back from Romani to Magdhaba and Rafa and out of the Egyptian Sinai Peninsula in 1916. After a stalemate on the Gaza–Beersheba line between March and October 1917, Beersheba was captured by the Australian Mounted Division's 4th Light Horse Brigade. Their mounted charge succeeded after a coordinated attack by the British infantry and Yeomanry cavalry and the Australian and New Zealand Light Horse and Mounted Rifles brigades. A series of coordinated attacks by these Egyptian Expeditionary Force infantry and mounted troops was also successful at the Battle of Mughar Ridge, during which the British infantry divisions and the Desert Mounted Corps drove two Ottoman armies back to the Jaffa–Jerusalem line. The infantry, with mainly dismounted cavalry and mounted infantry, fought in the Judean Hills to eventually almost encircle Jerusalem, which was occupied shortly afterwards. During a pause in operations necessitated by the German spring offensive in 1918 on the Western Front, joint infantry and mounted infantry attacks towards Amman and Es Salt resulted in retreats back to the Jordan Valley, which continued to be occupied by mounted divisions during the summer of 1918. The Australian Mounted Division was armed with swords, and in September the successful breaching of the Ottoman line on the Mediterranean coast by the British Empire infantry of XXI Corps was followed by attacks by the 4th Cavalry Division, 5th Cavalry Division and Australian Mounted Division, which almost encircled two Ottoman armies in the Judean Hills, forcing their retreat. Meanwhile, Chaytor's Force of infantry and mounted infantry in the ANZAC Mounted Division held the Jordan Valley, covering the right flank, and later advanced eastwards to capture Es Salt and Amman along with half of a third Ottoman army. A subsequent pursuit by the 4th Cavalry Division and the Australian Mounted Division, followed by the 5th Cavalry Division, reached Damascus. Armoured cars and 5th Cavalry Division lancers were continuing the pursuit of Ottoman units north of Aleppo when the Armistice of Mudros was signed by the Ottoman Empire. Post–World War I. A combination of military conservatism in almost all armies and post-war financial constraints prevented the lessons of 1914–1918 from being acted on immediately. There was a general reduction in the number of cavalry regiments in the British, French, Italian and other Western armies, but it was still argued with conviction (for example in the 1922 edition of the "Encyclopædia Britannica") that mounted troops had a major role to play in future warfare. The 1920s saw an interim period during which cavalry remained a conspicuous element of all major armies, though much less so than prior to 1914. Cavalry was extensively used in the Russian Civil War and the Soviet-Polish War. The last major cavalry battle was the Battle of Komarów in 1920, between Poland and the Russian Bolsheviks. Colonial warfare in Morocco, Syria, the Middle East and the North West Frontier of India provided some opportunities for mounted action against enemies lacking advanced weaponry. The post-war German Army (Reichsheer) was permitted a large proportion of cavalry (18 regiments or 16.4% of total manpower) under the conditions of the Treaty of Versailles.
The British Army mechanised all of its cavalry regiments between 1929 and 1941, converting them from horses to armoured vehicles and forming the Royal Armoured Corps together with the Royal Tank Regiment. The U.S. Cavalry abandoned its sabres in 1934 and commenced the conversion of its horsed regiments to mechanised units, starting with the First Regiment of Cavalry in January 1933. During the Turkish War of Independence, Turkish cavalry under General Fahrettin Altay was instrumental in the Kemalist victory over the invading Greek Army in 1922 during the Battle of Dumlupınar. The 5th Cavalry Corps was able to slip behind the main Greek army, cutting off all communication and supply lines as well as retreat options. This forced the surrender of the remaining Greek forces and may have been the last time in history that cavalry played a decisive role in the outcome of a battle. During the 1930s, the French Army experimented with integrating mounted and mechanised cavalry units into larger formations. Dragoon regiments were converted to motorised infantry (trucks and motorcycles) and cuirassiers to armoured units, while light cavalry (chasseurs à cheval, hussars and spahis) remained as mounted sabre squadrons. The theory was that mixed forces comprising these diverse units could utilise the strengths of each according to circumstances. In practice, mounted troops proved unable to keep up with fast-moving mechanised units over any distance. The 39 cavalry regiments of the British Indian Army were reduced to 21 as the result of a series of amalgamations immediately following World War I. The new establishment remained unchanged until 1936, when three regiments were redesignated as permanent training units, each with six still-mounted regiments linked to them. In 1938, the process of mechanization began with the conversion of a full cavalry brigade (two Indian regiments and one British) to armoured car and tank units. By the end of 1940, all of the Indian cavalry had been mechanized, initially and in the majority of cases as motorized infantry transported in 15-cwt trucks. The last horsed regiment of the British Indian Army (other than the Viceroy's Bodyguard and some Indian States Forces regiments) was the 19th King George's Own Lancers, which had its final mounted parade at Rawalpindi on 28 October 1939. This unit still exists in the Pakistan Army as an armored regiment. World War II. While most armies still maintained cavalry units at the outbreak of World War II in 1939, significant mounted action was largely restricted to the Polish, Balkan, and Soviet campaigns. Rather than charge their mounts into battle, cavalry units were either used as mounted infantry (using horses to move into position and then dismounting for combat) or as reconnaissance units (especially in areas not suited to tracked or wheeled vehicles). Polish. A popular myth is that Polish cavalry armed with lances charged German tanks during the September 1939 campaign. This arose from misreporting of a single clash on 1 September near Krojanty, when two squadrons of the Polish 18th Lancers armed with sabres scattered German infantry before being caught in the open by German armoured cars. Two examples illustrate how the myth developed. First, because motorised vehicles were in short supply, the Poles used horses to pull anti-tank weapons into position. Second, there were a few incidents when Polish cavalry was trapped by German tanks and attempted to fight free.
However, this did not mean that the Polish army chose to attack tanks with horse cavalry. Later, on the Eastern Front, the Red Army did deploy cavalry units effectively against the Germans. A more correct term would be "mounted infantry" instead of "cavalry", as horses were primarily used as a means of transportation, for which they were very suitable in view of the very poor road conditions in pre-war Poland. Another myth describes Polish cavalry as being armed with both sabres and lances; lances were used for peacetime ceremonial purposes only and the primary weapon of the Polish cavalryman in 1939 was a rifle. Individual equipment did include a sabre, probably because of well-established tradition, and in the case of melee combat this secondary weapon would probably be more effective than a rifle and bayonet. Moreover, the Polish cavalry brigade order of battle in 1939 included, apart from the mounted soldiers themselves, light and heavy machine guns (wheeled), anti-tank rifles (model 35), anti-aircraft weapons, anti-tank artillery such as the Bofors 37 mm, as well as light and scout tanks. The last mutual charge of cavalry against cavalry in Europe took place in Poland during the Battle of Krasnobród, when Polish and German cavalry units clashed with each other. The last classical cavalry charge of the war took place on March 1, 1945, during the Battle of Schoenfeld by the 1st "Warsaw" Independent Cavalry Brigade. Infantry and tanks had been employed to little effect against the German position; both floundered in the open wetlands, dominated by infantry and anti-tank fire from the German fortifications on the forward slope of Hill 157, overlooking the wetlands. The Germans had not taken cavalry into consideration when fortifying their position, and the "Warsaw" brigade's swift assault overran the German anti-tank guns and developed into an attack on the village itself, now supported by infantry and tanks. Greek. The Italian invasion of Greece in October 1940 saw mounted cavalry used effectively by the Greek defenders along the mountainous frontier with Albania. Three Greek cavalry regiments (two mounted and one partially mechanized) played an important role in the Italian defeat in this difficult terrain. Soviet. The contribution of Soviet cavalry to the development of modern military operational doctrine and its importance in defeating Nazi Germany have been eclipsed by the higher profile of tanks and airplanes. Soviet cavalry contributed significantly to the defeat of the Axis armies. They provided the most mobile troops available in the early stages of the war, when trucks and other equipment were low in quality, as well as cover for retreating forces. Considering their relatively limited numbers, the Soviet cavalry played a significant role in giving Germany its first real defeats in the early stages of the war. The continuing potential of mounted troops was demonstrated during the Battle of Moscow, against Guderian and the powerful central German 9th Army. Pavel Belov was given by Stavka a mobile group including the elite 9th Tank Brigade, ski battalions and a Katyusha rocket launcher battalion, among others; the unit additionally received new weapons. This newly created group became the first to carry out the Soviet counter-offensive in late November, before the general offensive began on 5 December. These mobile units often played major roles in both defensive and offensive operations.
Cavalry were amongst the first Soviet units to complete the encirclement in the Battle of Stalingrad, thus sealing the fate of the German 6th Army. Mounted Soviet forces also played a role in the encirclement of Berlin, with some Cossack cavalry units reaching the Reichstag in April 1945. Throughout the war they performed important tasks such as the capture of bridgeheads, considered one of the hardest jobs in battle, often doing so with inferior numbers. For instance, the 8th Guards Cavalry Regiment of the 2nd Guards Cavalry Division, 1st Guards Cavalry Corps, often fought outnumbered against elite German units. By the final stages of the war only the Soviet Union was still fielding mounted units in substantial numbers, some in combined mechanized and horse units. The main advantage of this tactical approach was in enabling mounted infantry to keep pace with advancing tanks. Other factors favoring the retention of mounted forces included the high quality of the Russian Cossacks, who provided about half of all mounted Soviet cavalry throughout the war. They excelled at manoeuvre warfare, since the lack of roads limited the effectiveness of wheeled vehicles in many parts of the Eastern Front. Another consideration was that sufficient logistic capacity was often not available to support very large motorized forces, whereas cavalry was relatively easy to maintain when detached from the main army and acting on its own initiative. The main usage of the Soviet cavalry involved infiltration through front lines with subsequent deep raids, which disorganized German supply lines. Another role was the pursuit of retreating enemy forces during major front-line operations and breakthroughs. Hungarian. During World War II, the Royal Hungarian Army's hussars were typically only used to undertake reconnaissance tasks against Soviet forces, and then only in detachments of section or squadron strength. The last documented hussar attack was conducted by Lieutenant Colonel Kálmán Mikecz on August 16, 1941, at Nikolaev. The hussars, arriving as reinforcements, were employed to break through Russian positions ahead of German troops. The hussars, equipped with swords and submachine guns, broke through the Russian lines in a single attack. An eyewitness account of the last hussar attack by Erich Kern, a German officer, was written in his memoir in 1948: … We were again in a tough fight with the desperately defensive enemy who dug himself along a high railway embankment. We've been attacked four times already, and we've been kicked back all four times. The battalion commander swore, but the company commanders were helpless. Then, instead of the artillery support we asked for countless times, a Hungarian hussar regiment appeared on the scene. We laughed. What the hell do they want here with their graceful, elegant horses? We froze at once: these Hungarians went crazy. Cavalry squadron after cavalry squadron approached. The command word rang. The bronze-brown, slender riders almost grew to their saddle. Their shining colonel of golden parolis jerked his sword. Four or five armored cars cut out of the wings, and the regiment slashed across the wide plain with flashing swords in the afternoon sun. Seydlitz attacked like this once before. Forgetting all caution, we climbed out of our covers. It was all like a great equestrian movie. The first shots rumbled, then became less frequent.
With astonished eyes, in disbelief, we watched as the Soviet regiment, which had so far repulsed our attacks with desperate determination, now turned around and left its positions in panic. And the triumphant Hungarians chased the Russians in front of them and shredded them with their glittering sabers. The hussar sword, it seems, was a bit much for the nerves of Russians. Now, for once, the ancient weapon has triumphed over modern equipment ... Italian. The last mounted sabre charge by Italian cavalry occurred on August 24, 1942, at Isbuscenski (Russia), when a squadron of the Savoia Cavalry Regiment charged the 812th Siberian Infantry Regiment. The remainder of the regiment, together with the Novara Lancers, made a dismounted attack in an action that ended with the retreat of the Russians after heavy losses on both sides. The final Italian cavalry action occurred on October 17, 1942, at Poloj (now in Croatia), when a squadron of the Alexandria Cavalry Regiment engaged a large group of Yugoslav partisans. Other Axis Powers. Romanian, Hungarian and Italian cavalry were dispersed or disbanded following the retreat of the Axis forces from Russia. Germany still maintained some mounted (mixed with bicycles) SS and Cossack units until the last days of the War. Finnish. Finland used mounted troops effectively against Russian forces in forested terrain during the Continuation War. The last Finnish cavalry unit was not disbanded until 1947. American. The U.S. Army's last horse cavalry actions were fought during World War II: a) by the 26th Cavalry Regiment, a small mounted regiment of Philippine Scouts, which fought the Japanese during the retreat down the Bataan peninsula until it was effectively destroyed by January 1942; and b) on captured German horses by the mounted reconnaissance section of the U.S. 10th Mountain Division in a spearhead pursuit of the German Army across the Po Valley in Italy in April 1945. The last horsed U.S. Cavalry formation (the Second Cavalry Division) was dismounted in March 1944. British. All British Army cavalry regiments had been mechanised since 1 March 1942, when the Queen's Own Yorkshire Dragoons (Yeomanry) was converted to a motorised role, following mounted service against the Vichy French in Syria the previous year. The final cavalry charge by British Empire forces occurred on 21 March 1942, when a 60-strong patrol of the Burma Frontier Force encountered Japanese infantry near Toungoo airfield in central Myanmar. The Sikh sowars of the Frontier Force cavalry, led by Captain Arthur Sandeman of The Central India Horse (21st King George V's Own Horse), charged in the old style with sabres, and most were killed. Mongolian. In the early stages of World War II, mounted units of the Mongolian People's Army were involved in the Battle of Khalkhin Gol against invading Japanese forces. Soviet forces under the command of Georgy Zhukov, together with Mongolian forces, defeated the Japanese Sixth Army and effectively ended the Soviet–Japanese Border Wars. After the Soviet–Japanese Neutrality Pact of 1941, Mongolia remained neutral throughout most of the war, but its geographical situation meant that the country served as a buffer between Japanese forces and the Soviet Union. In addition to keeping around 10% of the population under arms, Mongolia provided half a million trained horses for use by the Soviet Army. In 1945 a partially mounted Soviet–Mongolian Cavalry Mechanized Group played a supporting role on the western flank of the Soviet invasion of Manchuria.
The last active service seen by cavalry units of the Mongolian Army occurred in 1946–1948, during border clashes between Mongolia and the Republic of China. Post–World War II to the present day. While most modern "cavalry" units have some historic connection with formerly mounted troops, this is not always the case. The modern Irish Defence Forces (DF) includes a "Cavalry Corps" equipped with armoured cars and Scorpion tracked combat reconnaissance vehicles. The DF has never included horse cavalry since its establishment in 1922 (other than a small mounted escort of Blue Hussars drawn from the Artillery Corps when required for ceremonial occasions). However, the mystique of the cavalry is such that the name has been introduced for what was always a mechanised force. Some engagements in late 20th and early 21st century guerrilla wars involved mounted troops, particularly against partisan or guerrilla fighters in areas with poor transport infrastructure. Such units were not used as cavalry but rather as mounted infantry. Examples occurred in Afghanistan, Portuguese Africa and Rhodesia. The French Army used existing mounted squadrons of Spahis to a limited extent for patrol work during the Algerian War (1954–1962). The last mounted charge by French cavalry was carried out on 14 May 1957 by a detachment of Spahis at Magoura during the Algerian War. The Swiss Army maintained a mounted dragoon regiment for combat purposes until 1973. The Portuguese Army used horse-mounted cavalry with some success in the wars of independence in Angola and Mozambique in the 1960s and 1970s. During the 1964–1979 Rhodesian Bush War, the Rhodesian Army created an elite mounted infantry unit called Grey's Scouts to patrol the country's borders and fight nationalist guerrilla units. It was retained for several years into the 1980s following Rhodesia's transition to Zimbabwe. During the period of civil war in Afghanistan from 1978 onwards, there have been several instances of horse-mounted combat. Central and South American armies maintained mounted cavalry for longer than those of Asia, Europe, or North America. The Mexican Army included a number of horse-mounted cavalry regiments as late as the mid-1990s, and the Chilean Army had five such regiments in 1983 as mounted mountain troops. After the end of World War II, the remaining 26 Soviet cavalry divisions were mostly converted into mechanized and tank units or disbanded. Meanwhile, the overall Red Army became the Soviet Ground Forces in 1945. The last cavalry divisions were not disbanded until the early 1950s, the final one, the 4th Guards Cavalry Division (II Formation, previously reduced in status from the 4th Guards Cavalry Corps), being disbanded in April 1955. Operational horse cavalry. Today the Indian Army's 61st Cavalry is reported to be the largest existing horse-mounted cavalry unit still having operational potential. It was raised in 1951 from the amalgamated state cavalry squadrons of Gwalior, Jodhpur, and Mysore. While primarily utilised for ceremonial purposes, the regiment can be deployed for internal security or police roles if required. The 61st Cavalry and the President's Body Guard parade in full dress uniform in New Delhi each year in what is probably the largest assembly of traditional cavalry still to be seen in the world. Both the Indian and the Pakistani armies maintain armoured regiments with the titles of Lancers or Horse, dating back to the 19th century.
As of 2007, the Chinese People's Liberation Army employed two battalions of horse-mounted border guards in Xinjiang for border patrol purposes. PLA mounted units last saw action during border clashes with Vietnam in the 1970s and 1980s, after which most cavalry units were disbanded as part of major military downsizing in the 1980s. In the wake of the 2008 Sichuan earthquake, there were calls to rebuild the army horse inventory for disaster relief in difficult terrain. Subsequent Chinese media reports confirm that the PLA maintains operational horse cavalry at squadron strength in Xinjiang and Inner Mongolia for scouting, logistical, and border security purposes, and one at company strength in Qinghai. The Chilean Army still maintains a mixed armoured cavalry regiment, with elements of it acting as mounted mountain exploration troops, based in the city of Angol, as well as another independent exploration cavalry detachment in the town of Chaitén. The rugged mountain terrain calls for the use of special horses suited to it. The Argentine Army has two mounted cavalry units: the Regiment of Horse Grenadiers, which performs mostly ceremonial duties but at the same time is responsible for the president's security (in this case, acting as infantry), and the 4th Mountain Cavalry Regiment (which comprises both horse and light armoured squadrons), stationed in San Martín de los Andes, where it has an exploration role as part of the 6th Mountain Brigade. Most armoured cavalry units of the Army are considered successors to the old cavalry regiments from the Independence Wars, and keep their traditional names (such as Hussars, Cuirassiers, and Lancers) and uniforms. Equestrian training remains an important part of their tradition, especially among officers. Ceremonial horse cavalry and armored cavalry retaining traditional titles. Cavalry or mounted gendarmerie units continue to be maintained for purely or primarily ceremonial purposes by the Algerian, Argentine, Bolivian, Brazilian, British, Bulgarian, Canadian, Chilean, Colombian, Danish, Dutch, Finnish, French, Hungarian, Indian, Italian, Jordanian, Malaysian, Mongolian, Moroccan, Nepalese, Nigerian, North Korean, Omani, Pakistani, Panamanian, Paraguayan, Peruvian, Polish, Portuguese, Russian, Senegalese, Spanish, Swedish, Thai, Tunisian, Turkmen, United States, Uruguayan and Venezuelan armed forces. A number of armoured regiments in the British Army retain the historic designations of Hussars, Dragoons, Light Dragoons, Dragoon Guards, Lancers and Yeomanry. Only the Household Cavalry (consisting of the Life Guards' mounted squadron, The Blues and Royals' mounted squadron, the State Trumpeters of The Household Cavalry and the Household Cavalry Mounted Band) are maintained for mounted (and dismounted) ceremonial duties in London. The French Army still has regiments with the historic designations of Cuirassiers, Hussars, Chasseurs, Dragoons and Spahis. Only the cavalry of the Republican Guard and a ceremonial "fanfare" detachment of trumpeters for the cavalry/armoured branch as a whole are now mounted. In the Canadian Army, a number of regular and reserve units have cavalry roots, including The Royal Canadian Hussars (Montreal), the Governor General's Horse Guards, Lord Strathcona's Horse, The British Columbia Dragoons, The Royal Canadian Dragoons, and the South Alberta Light Horse. 
Of these, only Lord Strathcona's Horse and the Governor General's Horse Guards maintain an official ceremonial horse-mounted cavalry troop or squadron. The modern Pakistan army maintains about 40 armoured regiments with the historic titles of Lancers, Cavalry or Horse. Six of these date back to the 19th century, although only the President's Body Guard remains horse-mounted. In 2002, the Army of the Russian Federation reintroduced a ceremonial mounted squadron wearing historic uniforms. Both the Australian and New Zealand armies follow the British practice of maintaining traditional titles (Light Horse or Mounted Rifles) for modern mechanised units. However, neither country retains a horse-mounted unit. Several armored units of the modern United States Army retain the designation of "armored cavalry". The United States also has "air cavalry" units equipped with helicopters. The Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division, made up of active-duty soldiers, still functions as an active unit, trained to approximate the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s. The Turkish Armed Forces retain a ceremonial cavalry regiment, which also participates in equestrianism, following the disbandment of the operational mounted brigades during the 1960s. Non-combat support roles. The First Troop Philadelphia City Cavalry is a volunteer unit within the Pennsylvania Army National Guard which serves as a combat force when in federal service but acts in a mounted disaster relief role when in state service. In addition, the Parsons' Mounted Cavalry is a Reserve Officer Training Corps unit which forms part of the Corps of Cadets at Texas A&M University. Valley Forge Military Academy and College also has a Mounted Company, known as D-Troop. Some individual U.S. states maintain cavalry units as a part of their respective state defense forces. The Maryland Defense Force includes a cavalry unit, Cavalry Troop A, which serves primarily as a ceremonial unit. The unit training includes a saber qualification course based upon the 1926 U.S. Army course. Cavalry Troop A also assists other Maryland agencies as a rural search and rescue asset. In Massachusetts, the National Lancers trace their lineage to a volunteer cavalry militia unit established in 1836 and are currently organized as an official part of the Massachusetts Organized Militia. The National Lancers maintain three units, Troops A, B, and C, which serve in a ceremonial role and assist in search and rescue missions. In July 2004, the National Lancers were ordered into active state service to guard Camp Curtis Guild during the 2004 Democratic National Convention. The Governor's Horse Guard of Connecticut maintains two companies which are trained in urban crowd control. In 2020, the California State Guard stood up the 26th Mounted Operations Detachment, a search-and-rescue cavalry unit. Social status. From the beginning of civilization to the 20th century, ownership of heavy cavalry horses has been a mark of wealth amongst settled peoples. A cavalry horse involves considerable expense in breeding, training, feeding, and equipment, and has very little productive use except as a mode of transport. For this reason, and because of its often decisive military role, the cavalry has typically been associated with high social status. 
This was most clearly seen in the feudal system, where a lord was expected to enter combat armored and on horseback and bring with him an entourage of lightly armed peasants on foot. If landlords and peasant levies came into conflict, the poorly trained footmen would be ill-equipped to defeat armored knights. In later national armies, service as an officer in the cavalry was generally a badge of high social status. For instance, prior to 1914 most officers of British cavalry regiments came from a socially privileged background, and the considerable expenses associated with their role generally required private means, even after it became possible for officers of the line infantry regiments to live on their pay. Options open to poorer cavalry officers in the various European armies included service with less fashionable (though often highly professional) frontier or colonial units. These included the British Indian cavalry, the Russian Cossacks and the French Chasseurs d'Afrique. During the 19th and early 20th centuries, most monarchies maintained a mounted cavalry element in their royal or imperial guards. These ranged from small units providing ceremonial escorts and palace guards, through to large formations intended for active service. The mounted escort of the Spanish Royal Household provided an example of the former and the twelve cavalry regiments of the Prussian Imperial Guard an example of the latter. In either case, the officers of such units were likely to be drawn from the aristocracies of their respective societies. On film. Some sense of the noise and power of a cavalry charge can be gained from the 1970 film "Waterloo", which featured some 2,000 cavalrymen, some of them Cossacks. It included detailed displays of the horsemanship required to manage animals and weapons in large numbers at the gallop (unlike the real battle of Waterloo, where deep mud significantly slowed the horses). The Gary Cooper movie "They Came to Cordura" contains a scene of a cavalry regiment deploying from march to battle line formation. A smaller-scale cavalry charge can be seen in "The Lord of the Rings: The Return of the King" (2003); although the finished scene has substantial computer-generated imagery, raw footage and reactions of the riders are shown in the Extended Version DVD Appendices. Other films that show cavalry actions include:
6818
27823944
https://en.wikipedia.org/wiki?curid=6818
Citric acid cycle
The citric acid cycle—also known as the Krebs cycle, Szent–Györgyi–Krebs cycle, or TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions that release the energy stored in nutrients through acetyl-CoA oxidation. The energy released is available in the form of ATP. The Krebs cycle is used by organisms that generate energy via respiration, either anaerobically or aerobically (organisms that ferment use different pathways). In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, which are used in other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is branded as a "cycle", it is not necessary for metabolites to follow a specific route; at least three alternative pathways of the citric acid cycle are recognized. Its name is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions. The cycle consumes acetate (in the form of acetyl-CoA) and water and reduces NAD+ to NADH, releasing carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP. In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol, with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP. Discovery. Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after being broken down in the Latapie mincer and released into aqueous solution, pigeon breast muscle was very well suited to the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle". Overview. The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two-carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced, which enters the citric acid cycle. 
The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD+) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP. One of the primary sources of acetyl-CoA is from the breakdown of sugars by glycolysis which yield pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex generating acetyl-CoA according to the following reaction scheme: The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle: Steps. There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table. Two carbon atoms are oxidized to CO2, the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation. FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain. Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix. The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP). Products. Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2. Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2. The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively. The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38. Efficiency. The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. 
If transported using the glycerol phosphate shuttle rather than the malate–aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduce the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule. Variation. While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (the description here follows the mammalian pathway variant). Some differences exist between eukaryotes and prokaryotes. The conversion of D-"threo"-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.41, while prokaryotes employ the NADP+-dependent EC 1.1.1.42. Similarly, the conversion of ("S")-malate to oxaloacetate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4. A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). In mammals, a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4), also operates. The level of utilization of each isoform is tissue-dependent. In some acetate-producing bacteria, such as "Acetobacter aceti", an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as "Helicobacter pylori", employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5). Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD+-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase. In cancer. In cancer, substantial metabolic derangements occur to ensure the proliferation of tumor cells, and consequently metabolites that serve to facilitate tumorigenesis, dubbed oncometabolites, can accumulate. 
Among the best-characterized oncometabolites is 2-hydroxyglutarate, which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is formed as a minor error product of several metabolic pathways and is readily converted back to alpha-ketoglutarate by the hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH); it has no known physiologic role in mammalian cells. Of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite, as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor for a number of enzymes that facilitate reactions via alpha-ketoglutarate, namely the alpha-ketoglutarate-dependent dioxygenases. This mutation results in several important changes to the metabolism of the cell. For one thing, the extra NADPH-dependent reduction can contribute to depletion of cellular stores of NADPH and also reduce the levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell, as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. However, in the absence of alpha-ketoglutarate this cannot be done, and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs, which require hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze their reactions results in stabilization of hypoxia-inducible factor alpha, since hydroxylation is necessary to promote its degradation (under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration. 
NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-CoA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%. Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme. Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach the tens of micromolar range during cellular activation. It activates pyruvate dehydrogenase phosphatase, which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway. Transcriptional regulation. There is a link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates its interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets it for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF. Major metabolic pathways converging on the citric acid cycle. Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl-CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions. In this section and in the next, the citric acid cycle intermediates are indicated in "italics" to distinguish them from other substrates and end-products. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, "acetyl-CoA", and NADH, as in the normal cycle. However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form "oxaloacetate". 
This latter reaction "fills up" the amount of "oxaloacetate" in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize "acetyl-CoA" when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. "citrate", "iso-citrate", "alpha-ketoglutarate", "succinate", "fumarate", "malate", and "oxaloacetate") are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of "oxaloacetate" available to combine with "acetyl-CoA" to form "citric acid". This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. "Acetyl-CoA", on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of "acetyl-CoA" is consumed for every molecule of "oxaloacetate" present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of "acetyl-CoA" that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-Enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Following, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just like fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate. In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial "oxaloacetate" is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of "oxaloacetate" to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate ("malate") is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis. In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. "alpha-ketoglutarate" derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into "acetyl-CoA" which can be burned to CO2 and water, or used to form ketone bodies, which too can only be burned in tissues other than the liver where they are formed, or excreted via the urine or breath. 
These latter amino acids are therefore termed "ketogenic" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via "malate" which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called "glucogenic" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as "oxaloacetate" (an anaplerotic reaction) or as "acetyl-CoA" to be disposed of as CO2 and water. In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate. In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial "acetyl-CoA", which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into "succinyl-CoA" and fed into the citric acid cycle as an anaplerotic intermediate. The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 "acetyl-CoA" molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of "acetyl-CoA" is 40. Citric acid cycle intermediates serve as substrates for biosynthetic processes. In this subheading, as in the previous one, the TCA intermediates are identified by "italics". Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. "Acetyl-CoA" cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, "citrate" is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to mitochondrion as "malate" (and then converted back into "oxaloacetate" to transfer more "acetyl-CoA" out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D. The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids the alpha keto-acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into "alpha-ketoglutarate", which is a citric acid cycle intermediate. 
The intermediates that can provide the carbon skeletons for amino acid synthesis are "oxaloacetate", which forms aspartate and asparagine, and "alpha-ketoglutarate", which forms glutamine, proline, and arginine. Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA. The pyrimidines are partly assembled from aspartate (derived from "oxaloacetate"). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP. The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, "succinyl-CoA". These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes. During gluconeogenesis, mitochondrial "oxaloacetate" is reduced to "malate", which is then transported out of the mitochondrion, to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate-limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney. Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway. Glucose feeds the TCA cycle via circulating lactate. The metabolic role of lactate is well recognized as a fuel for tissues, in mitochondrial cytopathies such as DPH cytopathy, and in the scientific field of oncology (tumors). In the classical Cori cycle, muscles produce lactate, which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle. Evolution. It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. It may even predate biosis: the substrates appear to undergo most of the reactions spontaneously in the presence of persulfate radicals. Alternatively, components of the citric acid cycle could have had their beginnings in the interstellar medium. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged to the TCA cycle.
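The ATP bookkeeping quoted in the Products and Efficiency sections above can be cross-checked with a short piece of arithmetic. The sketch below is only an illustration of that accounting, not part of the original article: the dictionary and function names are ad hoc, GTP is counted as ATP, and treating cytosolic NADH as entering at the FADH2 level via the glycerol phosphate shuttle is a simplifying assumption consistent with the figures given in the text.

```python
# Per-glucose ATP accounting using the figures quoted above (illustrative only).
PER_TURN = {"NADH": 3, "FADH2": 1, "GTP": 1, "CO2": 2}   # per acetyl-CoA (one turn of the cycle)
GLYCOLYSIS = {"ATP": 2, "NADH": 2}                        # cytosolic, per glucose
PYRUVATE_DEHYDROGENASE = {"NADH": 2}                      # per glucose (two pyruvate molecules)

def atp_per_glucose(atp_per_nadh, atp_per_fadh2, glycerol_phosphate_shuttle=False):
    """Estimate total ATP per glucose for a given oxidative phosphorylation efficiency."""
    turns = 2  # two acetyl-CoA, hence two turns of the cycle per glucose
    mito_nadh = turns * PER_TURN["NADH"] + PYRUVATE_DEHYDROGENASE["NADH"]   # 8 NADH in the matrix
    cyto_nadh = GLYCOLYSIS["NADH"]                                          # 2 NADH in the cytosol
    fadh2 = turns * PER_TURN["FADH2"]                                       # 2 FADH2
    substrate_level = GLYCOLYSIS["ATP"] + turns * PER_TURN["GTP"]           # 4 (GTP counted as ATP)
    # Cytosolic NADH imported by the glycerol phosphate shuttle donates its
    # electrons at the FADH2 level, which is what lowers the classic 38 figure.
    cyto_yield = cyto_nadh * (atp_per_fadh2 if glycerol_phosphate_shuttle else atp_per_nadh)
    return substrate_level + mito_nadh * atp_per_nadh + fadh2 * atp_per_fadh2 + cyto_yield

print(atp_per_glucose(3, 2))                                        # 38, the classic maximum
print(atp_per_glucose(3, 2, glycerol_phosphate_shuttle=True))       # 36, as noted for this shuttle
print(atp_per_glucose(2.5, 1.5, glycerol_phosphate_shuttle=True))   # 30.0, the revised estimate
```

Running the sketch reproduces the 38, 36 and approximately 30 ATP-per-glucose figures discussed above; the more detailed proton-to-ATP analysis cited in the text refines the last of these to 29.85.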
6821
36336112
https://en.wikipedia.org/wiki?curid=6821
Military engineering vehicle
A military engineering vehicle is a vehicle built for construction work or for the transportation of combat engineers on the battlefield. These vehicles may be modified civilian equipment (such as the armoured bulldozers that many nations field) or purpose-built military vehicles (such as the AVRE). The first such vehicles appeared alongside the first tanks; they were modified Mark V tanks used for bridging and mine clearance. Modern "military engineering vehicles" are expected to fulfill numerous roles, such as: bulldozer, crane, grader, excavator, dump truck, breaching vehicle, bridging vehicle, military ferry, amphibious crossing vehicle, and combat engineer section carrier. History. World War I. A Heavy RE tank was developed shortly after World War I by Major Giffard LeQuesne Martel RE. This vehicle was a modified Mark V tank. Two support functions for these Engineer Tanks were developed: bridging and mine clearance. The bridging component involved an assault bridge, designed by Major Charles Inglis RE, called the Canal Lock Bridge, which had sufficient length to span a canal lock. Major Martel mated the bridge with the tank and used hydraulic power generated by the tank's engine to maneuver the bridge into place. For mine clearance, the tanks were equipped with 2-ton rollers. 1918–1939. Between the wars, various experimental bridging tanks, developed by the Experimental Bridging Establishment (EBE), were used to test a series of methods for bridging obstacles. Captain SG Galpin RE conceived a prototype Light Tank Mk V to test the Scissors Assault Bridge. This concept was realised by Captain SA Stewart RE with significant input from a Mr DM Delany, a scientific civil servant in the employ of the EBE. MB Wild & Co, Birmingham, also developed a bridge that could span gaps of 26 feet using a complex system of steel wire ropes and a traveling jib, where the front section was projected and then attached to the rear section prior to launching the bridge. This system had to be abandoned due to lack of success in getting it to work; however, the idea was later used successfully on the Beaver Bridge Laying Tank. Early World War II. Once World War II had begun, the development of armoured vehicles for use by engineers in the field was accelerated under Delany's direction. The EBE rapidly developed an assault bridge carried on a modified Covenanter tank capable of deploying a 24-ton tracked load capacity bridge (Class 24) that could span gaps of 30 feet. However, it did not see service in the British armed forces, and all vehicles were passed on to Allied forces such as Australia and Czechoslovakia. A Class 30 design superseded the Class 24 with no real re-design, simply the substitution of the Covenanter tank with a suitably modified Valentine. As tanks in the war got heavier, a new bridge capable of supporting them was developed. A heavily modified Churchill used a single-piece bridge mounted on a turret-less tank and was able to lay the bridge in 90 seconds; this bridge was able to carry a 60-ton tracked or 40-ton wheeled load. Late World War II: Hobart's 'Funnies' and D-Day. Hobart's Funnies were a number of unusually modified tanks operated during the Second World War by the 79th Armoured Division of the British Army or by specialists from the Royal Engineers. 
They were designed in light of problems that more standard tanks experienced during the amphibious Dieppe Raid, so that the new models would be able to overcome the problems of the planned Invasion of Normandy. These tanks played a major part on the Commonwealth beaches during the landings. They were forerunners of the modern combat engineering vehicle and were named after their commander, Major General Percy Hobart. Hobart's unusual, specialized tanks were nicknamed "funnies". In U.S. forces, Sherman tanks were also fitted with dozer blades, and anti-mine roller devices were developed, enabling engineering operations and providing similar capabilities. Post war. Post-war, the value of combat engineering vehicles had been proven, and armoured multi-role engineering vehicles have since been added to the majority of armoured forces. Types. Civilian and militarized heavy equipment. Military engineering can employ a wide variety of heavy equipment in the same ways that this equipment is used outside the military. Bulldozers, cranes, graders, excavators, dump trucks, loaders, and backhoes all see extensive use by military engineers. Military engineers may also use civilian heavy equipment which has been modified for military applications. Typically, this involves adding armour for protection from battlefield hazards such as artillery, unexploded ordnance, mines, and small arms fire. Often this protection is provided by armour plates and steel jackets. Some examples of armoured civilian heavy equipment are the IDF Caterpillar D9, the American D7 TPK, the Canadian D6 armoured bulldozer, cranes, graders, excavators, and the M35 2-1/2 ton cargo truck. Militarized heavy equipment may also take the form of traditional civilian equipment designed and built to unique military specifications. These vehicles typically sacrifice some depth of capability from civilian models in order to gain greater speed and independence from prime movers. Examples of this type of vehicle include high-speed backhoes such as the Australian Army's High Mobility Engineering Vehicle (HMEV) from Thales or the Canadian Army's Multi-Purpose Engineer Vehicle (MPEV) from Arva. "The main article for civilian heavy equipment is:" Heavy equipment (construction) Armoured engineering vehicle. Typically based on the platform of a main battle tank, these vehicles go by different names depending upon the country of use or manufacture. In the US, the term "combat engineer vehicle (CEV)" is used; in the UK, the terms "Armoured Vehicle Royal Engineers (AVRE)" or Armoured Repair and Recovery Vehicle (ARRV) are used; while in Canada and other Commonwealth nations the term "armoured engineer vehicle (AEV)" is used. There is no set template for what such a vehicle will look like, yet likely features include a large dozer blade or mine ploughs, a large caliber demolition cannon, augers, winches, excavator arms and cranes or lifting booms. These vehicles are designed to directly conduct obstacle breaching operations and to conduct other earth-moving and engineering work on the battlefield. Good examples of this type of vehicle include the UK Trojan AVRE, the Russian IMR, and the US M728 Combat Engineer Vehicle. Although the term "armoured engineer vehicle" is used specifically to describe these multi-purpose tank-based engineering vehicles, that term is also used more generically in British and Commonwealth militaries to describe all heavy tank-based engineering vehicles used in the support of mechanized forces. 
Thus, "armoured engineer vehicle" used generically would refer to AEV, AVLB, Assault Breachers, and so on. Armoured earth mover. Lighter and less multi-functional than the CEVs or AEVs described above, these vehicles are designed to conduct earth-moving work on the battlefield and generally be anti-tank explosive proof. These vehicles have greater high speed mobility than traditional heavy equipment and are protected against the effects of blast and fragmentation. Good examples are the American M9 ACE and the UK FV180 Combat Engineer Tractor. Breaching vehicle. These vehicles are equipped with mechanical or other means for the breaching of man-made obstacles. Common types of breaching vehicles include mechanical flails, mine plough vehicles, and mine roller vehicles. In some cases, these vehicles will also mount mine-clearing line charges. Breaching vehicles may be either converted armoured fighting vehicles or purpose built vehicles. In larger militaries, converted AFV are likely to be used as "assault breachers" while the breached obstacle is still covered by enemy observation and fire, and then purpose built breaching vehicles will create additional lanes for following forces. Good examples of breaching vehicles include the US M1150 assault breacher vehicle, the UK Aardvark JSFU, and the Singaporean Trailblazer. Bridging vehicles. Several types of military bridging vehicles have been developed. An armoured vehicle-launched bridge (AVLB) is typically a modified tank hull converted to carry a bridge into battle in order to support crossing ditches, small waterways, or other gap obstacles. Another type of bridging vehicle is the truck launched bridge. The Soviet TMM bridging truck could carry and launch a 10-meter bridge that could be daisy-chained with other TMM bridges to cross larger obstacles. More recent developments have seen the conversion of AVLB and truck launched bridge with launching systems that can be mounted on either tank or truck for bridges that are capable of supporting heavy main battle tanks. Earlier examples of bridging vehicles include a type in which a converted tank hull is the bridge. On these vehicles, the hull deck comprises the main portion of the tread way while ramps extend from the front and rear of the vehicle to allow other vehicles to climb over the bridging vehicle and cross obstacles. An example of this type of armoured bridging vehicle was the Churchill Ark used in the Second World War. Combat engineer section carriers. Another type of CELLs are armoured fighting vehicles which are used to transport sappers (combat engineers) and can be fitted with a bulldozer's blade and other mine-breaching devices. They are often used as APCs because of their carrying ability and heavy protection. They are usually armed with machine guns and grenade launchers and usually tracked to provide enough tractive force to push blades and rakes. Some examples are the U.S. M113 APC, IDF Puma, Nagmachon, Husky, and U.S. M1132 ESV (a Stryker variant). Military ferries and amphibious crossing vehicles. One of the major tasks of military engineering is crossing major rivers. Several military engineering vehicles have been developed in various nations to achieve this task. One of the more common types is the amphibious ferry such as the M3 Amphibious Rig. These vehicles are self-propelled on land, they can transform into raft type ferries when in the water, and often multiple vehicles can connect to form larger rafts or floating bridges. 
Other types of military ferries, such as the Soviet "Plavayushij Transportyor - Srednyj", are able to load while still on land and transport other vehicles cross-country and over water. In addition to amphibious crossing vehicles, military engineers may also employ several types of boats. Military assault boats are small boats propelled by oars or an outboard motor and used to ferry dismounted infantry across water. Tank-based combat engineering vehicles. Most CEVs are armoured fighting vehicles that may be based on a tank chassis and have special attachments in order to breach obstacles. Such attachments may include dozer blades, mine rollers, cranes, etc. An example of an engineering vehicle of this kind is a bridgelaying tank, which replaces the turret with a segmented hydraulic bridge. Hobart's Funnies of the Second World War were a wide variety of armoured vehicles for combat engineering tasks. They were allocated to the initial beachhead assaults by the British and Commonwealth forces in the D-Day landings. Churchill tank. The British Churchill tank, because of its good cross-country performance and capacious interior with side hatches, became the most widely adapted with modifications, the base unit being the AVRE, which carried a large demolition gun.
6822
18474721
https://en.wikipedia.org/wiki?curid=6822
Catalonia
Catalonia is an autonomous community of Spain, designated as a "nationality" by its Statute of Autonomy. Most of its territory (except the Val d'Aran) is situated on the northeast of the Iberian Peninsula, to the south of the Pyrenees mountain range. Catalonia is administratively divided into four provinces or eight "vegueries" (regions), which are in turn divided into 43 "comarques". The capital and largest city, Barcelona, is the second-most populous municipality in Spain and the fifth-most populous urban area in the European Union. Modern-day Catalonia comprises most of the medieval and early modern Principality of Catalonia, with the remainder of the northern area now part of France's Pyrénées-Orientales. It is bordered by France (Occitanie) and Andorra to the north, the Mediterranean Sea to the east, and the Spanish autonomous communities of Aragon to the west and Valencia to the south. In addition to its approximately 580 km of coastline, Catalonia has major high landforms such as the Pyrenees and the Pre-Pyrenees, the Transversal Range (Serralada Transversal) and the Central Depression. The official languages are Catalan, Spanish, and the Aranese dialect of Occitan. In the 10th century, the County of Barcelona and the other neighboring counties became independent from West Francia. In 1137, Barcelona and the Kingdom of Aragon were united by marriage, resulting in a composite monarchy, the Crown of Aragon. Within the Crown, the Catalan counties merged into a state, the Principality of Catalonia, with its own distinct institutional system, such as Courts, Generalitat, and constitutions, being the base and promoter for the Crown's Mediterranean trade and expansionism. In the later Middle Ages, Catalan literature flourished. In 1516, Charles V became monarch of both the crowns of Aragon and Castile, retaining their previous distinct institutions and legislation. Growing tensions led to the revolt of the Principality of Catalonia (1640–1652), which briefly became a republic under French protection. By the Treaty of the Pyrenees (1659), the northern parts of Catalonia were ceded to France. During the War of the Spanish Succession (1701–1714), the states of the Crown of Aragon sided against the Bourbon Philip V of Spain, but following the Catalan capitulation on 11 September 1714 he imposed a unifying administration across Spain, enacting the Nueva Planta decrees which ended Catalonia's separate status, suppressing its institutions and legal system. Catalan as a language of government and literature was eclipsed by Spanish. In the 19th century, the Napoleonic and Carlist Wars affected Catalonia. In the second third of the century, it experienced industrialisation, as well as a cultural renaissance coupled with incipient nationalism and several workers' movements. The Second Spanish Republic (1931–1939) granted self-governance to Catalonia, restoring the Generalitat as its government. After the Spanish Civil War (1936–1939), the Francoist dictatorship enacted repressive measures, abolishing self-government and again banning the official use of the Catalan language. After a harsh period of autarky, from the late 1950s Catalonia saw rapid economic growth, drawing many workers from across Spain and making it one of Europe's largest industrial and tourist areas. During the Spanish transition to democracy (1975–1982), the Generalitat and Catalonia's self-government were reestablished, and Catalonia remains one of the most economically dynamic communities in Spain. 
In the 2010s, there was growing support for Catalan independence. On 27 October 2017, the Catalan Parliament unilaterally declared independence following a referendum that was deemed unconstitutional by the Spanish state. The Spanish Senate voted in favour of enforcing direct rule by removing the Catalan government and calling a snap regional election. The Spanish Supreme Court imprisoned seven former ministers of the Catalan government on charges of rebellion and misuse of public funds, while several others—including then-President Carles Puigdemont—fled to other European countries. Those in prison were pardoned by the Spanish government in 2021. Etymology and pronunciation. The name "Catalonia", spelled "Cathalonia", began to be used for the homeland of the Catalans ("Cathalanenses") in the late 11th century and was probably used before as a territorial reference to the group of counties that comprised part of the March of Gothia and the March of Hispania under the control of the Count of Barcelona and his relatives. The origin of the name "Catalunya" is subject to diverse interpretations because of a lack of evidence. One theory suggests that "Catalunya" derives from the name "Gothia" (or "Gauthia") "Launia" ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, known as "Gothia", whence "Gothland" and, through a series of intermediate forms, "Catalonia" theoretically derived. During the Middle Ages, Byzantine chroniclers claimed that "Catalania" derives from the local medley of Goths with Alans, initially constituting a "Goth-Alania". Other theories suggest: The native name, "Catalunya", is pronounced differently in Central Catalan, the most widely spoken variety, and in North-Western Catalan. The Spanish name is "Cataluña", and the Aranese name is "Catalonha". History. Prehistory. The first known human settlements in what is now Catalonia date to the beginning of the Middle Paleolithic. The oldest known trace of human occupation is a mandible found in Banyoles, described as pre-Neanderthal, that is, some 200,000 years old; other sources suggest it to be only about one third that old. From the Epipalaeolithic or Mesolithic, important remains survive, dated between 8000 and 5000 BC, such as those of Sant Gregori (Falset) and el Filador (Margalef de Montsant). The most important sites from these eras, all excavated in the region of Moianès, are the Balma del Gai (Epipaleolithic) and the Balma de l'Espluga. The Neolithic era began in Catalonia around 5000 BC, although the population was slower to develop fixed settlements thanks to the abundance of woods, which allowed the continuation of a fundamentally hunter-gatherer culture. An example of such settlements would be La Draga at Banyoles, an "early Neolithic village which dates from the end of the 6th millennium BC." The Bronze Age occurred between 1800 and 700 BC. There were some known settlements in the low Segre zone. The Bronze Age coincided with the arrival of the Indo-Europeans through the Urnfield Culture, whose successive waves of migration began around 1200 BC, and they were responsible for the creation of the first proto-urban settlements. Around the middle of the 7th century BC, the Iron Age arrived in Catalonia. Pre-Roman and Roman period. In pre-Roman times, the area that is now Catalonia was populated by the Iberians. The Iberian tribes – the Ilergetes, Indigetes and Lacetani (Cerretains) – also maintained relations with the peoples of the Mediterranean. 
Some urban agglomerations became significant, including Ilerda (Lleida) inland, Hibera (perhaps Amposta or Tortosa) and Indika (Ullastret). Coastal trading colonies were established by the ancient Greeks, who settled around the Gulf of Roses, in Emporion (Empúries) and Roses, in the 8th century BC. After the Carthaginian defeat by the Roman Republic, the north-east of Iberia became the first area to come under Roman rule and became part of Hispania, the westernmost part of the Roman Empire. Tarraco (modern Tarragona) was one of the most important Roman cities in Hispania and the capital of the province of Tarraconensis. Other important cities of the Roman period are Ilerda (Lleida), Dertosa (Tortosa) and Gerunda (Girona), as well as the ports of Empuriæ (former Emporion) and Barcino (Barcelona). As for the rest of Hispania, Latin law was granted to all cities under the reign of Vespasian (69–79 AD), while Roman citizenship was granted to all free men of the empire by the Edict of Caracalla in 212 AD (Tarraco, the capital, had already been a colony under Roman law since 45 BC). It was a rich agricultural province (olive oil, wine, wheat), and the first centuries of the Empire saw the construction of roads (the most important being the Via Augusta, parallel to the Mediterranean coastline) and infrastructure like aqueducts. Conversion to Christianity, attested in the 3rd century, was completed in urban areas in the 4th century. Although Hispania remained under Roman rule and did not fall under the rule of Vandals, Suebi and Alans in the 5th century, the main cities suffered frequent sacking and some deurbanization. Middle Ages. After the fall of the Western Roman Empire, the area was conquered by the Visigoths and was ruled as part of the Visigothic Kingdom for almost two and a half centuries. In 718, it came under Muslim control and became part of Al-Andalus, a province of the Umayyad Caliphate. From the conquest of Roussillon in 760 to the conquest of Barcelona in 801, the Frankish Empire took control of the area between Septimania and the Llobregat river from the Muslims and created heavily militarised, self-governing counties. These counties formed part of what is historiographically known as the Gothic and Hispanic Marches, a buffer zone in the south of the Frankish Empire, in the northeast of the Iberian Peninsula, acting as a defensive barrier against further invasions from Al-Andalus. These counties came under the rule of the counts of Barcelona, who were Frankish vassals nominated by the emperor of the Franks, to whom they were feudatories (801–988). At the end of the 9th century, the Count of Barcelona Wilfred the Hairy (878–897) made his titles hereditary and thus founded the dynasty of the House of Barcelona, which reigned in Catalonia until 1410. In 988, Borrell II, Count of Barcelona, did not recognise the new French king Hugh Capet as his king, evidencing the end of dependence on Frankish rule and confirming his successors (from Ramon Borrell I onwards) as independent of the Capetian crown. At the beginning of the eleventh century, the Catalan counties experienced an important process of feudalisation; however, the efforts of the Church-sponsored Peace and Truce Assemblies and the intervention of Ramon Berenguer I, Count of Barcelona (1035–1076), in the negotiations with the rebel nobility resulted in the partial restoration of comital authority under the new feudal order. 
To fulfil that purpose, Ramon Berenguer began the codification of the law in the written Usages of Barcelona, one of the first European compilations of feudal law. The earliest known use of the name "Catalonia" for these counties dates to 1117. In 1137, Ramon Berenguer IV, Count of Barcelona, decided to accept King Ramiro II of Aragon's proposal to receive the Kingdom of Aragon and to marry his daughter Petronila, establishing the dynastic union of the County of Barcelona with Aragon, creating a composite monarchy later known as the Crown of Aragon and making the Catalan counties that were vassalized by or merged with the County of Barcelona into a principality of the Aragonese Crown. During the reign of his son Alphons, in 1173, Catalonia was regarded as a legal entity for the first time, while the Usages of Barcelona were compiled in a process intended to turn them into the law and custom of Catalonia ("Consuetudinem Cathalonie"), considered one of the "milestones of Catalan political identity". In 1258, by means of the Treaty of Corbeil, James I of Aragon renounced his family rights and dominions in Occitania, while the king of France, Louis IX, formally relinquished any historical claim of feudal lordship he might have over the Catalan counties. This treaty confirmed, from the French point of view, the independence of the Catalan counties that had been established over the previous three centuries. As a coastal land, Catalonia became the base of the Aragonese Crown's maritime forces, which spread the power of the Crown in the Mediterranean, turning Barcelona into a powerful and wealthy city. In the period 1164–1410, new territories, the Kingdom of Valencia, the Kingdom of Majorca, the Kingdom of Sardinia, the Kingdom of Sicily, and, briefly, the Duchies of Athens and Neopatras, were incorporated into the dynastic domains of the House of Aragon. The expansion was accompanied by a great development of Catalan trade, creating an extensive trade network across the Mediterranean which competed with those of the maritime republics of Genoa and Venice. At the same time, the Principality of Catalonia developed a complex institutional and political system based on the concept of a pact between the estates of the realm and the king. Legislation had to be passed by the Catalan Courts ("Corts Catalanes"), one of the first parliamentary bodies of Europe, which, after 1283, officially obtained the power to pass legislation jointly with the monarch. The Courts were composed of the three estates organized into "arms" ("braços"), were presided over by the monarch, and approved the Catalan constitutions, which established a compilation of rights for the inhabitants of the Principality. In order to collect general taxes, the Catalan Courts of 1359 established a permanent representative body, known as the Generalitat, which gained considerable political power over the following centuries. The domains of the Aragonese Crown were severely affected by the Black Death pandemic and by later outbreaks of the plague. Between 1347 and 1497 Catalonia lost 37 percent of its population. In 1410, the last reigning monarch of the House of Barcelona, King Martin I, died without surviving descendants. Under the Compromise of Caspe (1412), the representatives of the kingdoms of Aragon and Valencia and the Principality of Catalonia appointed Ferdinand of the Castilian House of Trastámara as King of the Crown of Aragon.
During the reign of his son, John II, the persistent economic crisis and social and political tensions in the Principality led to the Catalan Civil War (1462–1472) and the War of the Remences (1462–1486), which left Catalonia exhausted. The Sentencia Arbitral de Guadalupe (1486) liberated the remença peasants from the feudal "evil customs". In the later Middle Ages, Catalan literature flourished in Catalonia proper and in the kingdoms of Majorca and Valencia, with such remarkable authors as the philosopher Ramon Llull, the Valencian poet Ausiàs March, and Joanot Martorell, author of the novel "Tirant lo Blanch", published in 1490. Modern era. Ferdinand II of Aragon, the grandson of Ferdinand I, and Queen Isabella I of Castile were married in 1469, later taking the title of the Catholic Monarchs; subsequently, this event was seen by historiographers as the dawn of a unified Spain. At this time, though united by marriage, the Crowns of Castile and Aragon maintained distinct territories, each keeping its own traditional institutions, parliaments, laws and currency. Castile commissioned expeditions to the Americas and benefited from the riches acquired in the Spanish colonisation of the Americas, but, in time, also carried the main burden of military expenses of the united Spanish kingdoms. After Isabella's death, Ferdinand II personally ruled both crowns. By virtue of descent from his maternal grandparents, Ferdinand and Isabella, in 1516 Charles I of Spain became the first king to rule the Crowns of Castile and Aragon simultaneously in his own right. Following the death of his paternal (House of Habsburg) grandfather, Maximilian I, Holy Roman Emperor, he was also elected Charles V, Holy Roman Emperor, in 1519. Over the next few centuries, the Principality of Catalonia was generally on the losing side of a series of wars that led steadily to an increased centralization of power in Spain. However, between the 16th and 18th centuries, the participation of the political community in local and general Catalan government grew (thus consolidating its constitutional system), while the kings remained absent, represented by a viceroy. Tensions between Catalan institutions and the monarchy began to arise. The large and burdensome presence of the Spanish royal army in the Principality due to the Franco-Spanish War led to an uprising of peasants, provoking the Reapers' War (1640–1652), which saw Catalonia rebel (briefly as a republic led by the president of the Generalitat, Pau Claris) with French help against the Spanish Crown for overstepping Catalonia's rights during the Thirty Years' War. Within a brief period France took full control of Catalonia. Most of Catalonia was reconquered by the Spanish monarchy, but Catalan rights were mostly recognised. Roussillon and half of Cerdanya were lost to France by the Treaty of the Pyrenees (1659). The most significant conflict concerning the governing monarchy was the War of the Spanish Succession (1701–1715), which began when the childless Charles II of Spain, the last Spanish Habsburg, died without an heir in 1700. Charles II had chosen Philip V of Spain, from the French House of Bourbon, as his successor. Catalonia, like other territories that formed the Crown of Aragon, rose up in support of the Austrian Habsburg pretender Charles VI, Holy Roman Emperor, in his claim to the Spanish throne as Charles III of Spain. The fight between the houses of Bourbon and Habsburg for the Spanish Crown split Spain and Europe.
The fall of Barcelona on 11 September 1714 to the Bourbon king Philip V militarily ended the Habsburg claim to the Spanish Crown, which had become legal fact in the Treaty of Utrecht (1713). Philip felt that he had been betrayed by the Catalan Courts, as they had initially sworn their loyalty to him when he presided over them in 1701. In retaliation for the betrayal, and inspired by the French model, the first Bourbon king enacted the Nueva Planta decrees (1707, 1715 and 1716), incorporating the realms of the Crown of Aragon, including the Principality of Catalonia in 1716, as provinces of the Crown of Castile, terminating their status as separate states along with their parliaments, institutions and public laws, as well as their politics, within a French-style centralized and absolutist kingdom of Spain. After the War of the Spanish Succession, the assimilation of the Crown of Aragon into the Castilian Crown through the Nueva Planta Decrees was the first step in the creation of the Spanish nation state. These nationalist policies, sometimes aggressive and still in force, have been and remain the seed of repeated territorial conflicts within the state. In the second half of the 17th century and the 18th century (excluding the parenthesis of the War of Succession and the post-war instability), Catalonia carried out a successful process of economic growth and proto-industrialization, reinforced in the last quarter of the century when Castile's trade monopoly with the American colonies ended. Late modern history. At the beginning of the nineteenth century, Catalonia was severely affected by the Napoleonic Wars. In 1808, it was occupied by French troops; the resistance against the occupation eventually developed into the Peninsular War. The rejection of French dominion was institutionalized with the creation of "juntas" (councils) which, remaining loyal to the Bourbons, exercised the sovereignty and representation of the territory due to the disappearance of the old institutions. In 1810, Napoleon took direct control of Catalonia, creating the Government of Catalonia under the rule of Marshal Augereau, and making Catalan briefly an official language again. Between 1812 and 1814, Catalonia was annexed to France. The French troops evacuated Catalan territory at the end of 1814. After the Bourbon restoration in Spain and the death of the absolutist king Ferdinand VII (1833), the Carlist Wars erupted against the newly established liberal state of Isabella II. Catalonia was divided, with the coastal and most industrialized areas supporting liberalism, while most of the countryside was in the hands of the Carlist faction; the latter proposed to reestablish the institutional systems suppressed by the Nueva Planta decrees in the ancient realms of the Crown of Aragon. The consolidation of the liberal state saw a new provincial division of Spain, including Catalonia, which was divided into four provinces (Barcelona, Girona, Lleida and Tarragona). In the second third of the 19th century, Catalonia became an important industrial center, particularly focused on textiles. This process was a consequence of the conditions of proto-industrialisation of textile production in the prior two centuries and of growing capital from wine and brandy exports, and was later boosted by government support for domestic manufacturing. In 1832, the Bonaplata Factory in Barcelona became the first factory in the country to make use of the steam engine. The first railway on the Iberian Peninsula was built between Barcelona and Mataró in 1848.
A policy to encourage company towns also saw the textile industry flourish in the countryside in the 1860s and 1870s. The policy of Spanish governments oscillated between free trade and protectionism, with protectionist measures becoming more common. To this day Catalonia remains one of the most industrialised areas of Spain. In the same period, Barcelona was the focus of industrial conflict and revolutionary uprisings known as "bullangues". In Catalonia, a republican current began to develop among the progressives, attracting many Catalans who favoured the federalisation of Spain. Meanwhile, the Catalan language saw a Romantic cultural renaissance from the second third of the century onwards, the "Renaixença", among both the working class and the bourgeoisie. Right after the fall of the First Spanish Republic (1873–1874) and the subsequent restoration of the Bourbon dynasty (1874), Catalan nationalism began to be organized politically under the leadership of the republican federalist Valentí Almirall. The anarchist movement had been active throughout the last quarter of the 19th century and the early 20th century, founding the CNT trade union in 1910 and achieving one of the first eight-hour workdays in Europe in 1919. Growing resentment of conscription and of the military culminated in the Tragic Week (Catalan: "Setmana Tràgica") in Barcelona in 1909. Under the hegemony of the Regionalist League, Catalonia gained a degree of administrative unity for the first time in the Modern era. In 1914, the four Catalan provinces were authorized to create a commonwealth (Catalan: "Mancomunitat"), lacking legislative power or political autonomy, which carried out an ambitious program of modernization, but it was disbanded in 1925 by the dictatorship of Primo de Rivera (1923–1930). During the final stage of the Dictatorship, with Spain beginning to suffer an economic crisis, Barcelona hosted the 1929 International Exposition. After the fall of the dictatorship and a brief proclamation of the Catalan Republic during the events of the proclamation of the Second Spanish Republic (14–17 April 1931), Catalonia received, in 1932, its first Statute of Autonomy from the Spanish Republic's Parliament, granting it a considerable degree of self-governance and establishing an autonomous body, the Generalitat of Catalonia, which included a parliament. The left-wing pro-independence leader Francesc Macià was appointed its first president. Under the Statute, Catalan became an official language. The governments of the Republican Generalitat, led by the Republican Left of Catalonia (ERC) leaders Francesc Macià (1931–1933) and Lluís Companys (1933–1940), sought to implement a modernizing and progressive social agenda, despite internal difficulties. This period was marked by political unrest, the effects of the economic crisis and their social repercussions. The Statute of Autonomy was suspended in 1934, due to the Events of 6 October in Barcelona, after the accession to the government of the Republic of the right-wing Spanish nationalist party CEDA, considered close to fascism. After the electoral victory of the left-wing Popular Front in February 1936, the members of the Government of Catalonia were pardoned and self-government was restored. Spanish Civil War (1936–1939) and Franco's rule (1939–1975). The defeat of the military rebellion against the Republican government in Barcelona placed Catalonia firmly on the Republican side of the Spanish Civil War.
During the war, there were two rival powers in Catalonia: the de jure power of the Generalitat and the de facto power of the armed popular militias. Violent confrontations between the workers' parties (CNT-FAI and POUM against the PSUC) culminated in the defeat of the former in 1937. The situation resolved itself progressively in favour of the Generalitat, but at the same time the Generalitat lost most of its autonomous powers within Republican Spain. In 1938 Franco's troops broke the Republican territory in two, isolating Catalonia from the rest of the Republican zone. The defeat of the Republican army in the Battle of the Ebro led in 1938 and 1939 to the occupation of Catalonia by Franco's forces. The defeat of the Spanish Republic in the Spanish Civil War brought to power the dictatorship of Francisco Franco, whose first ten-year rule was particularly violent, autocratic, and repressive in a political, cultural, social, and economic sense. In Catalonia, any kind of public activity associated with Catalan nationalism, republicanism, anarchism, socialism, liberalism, democracy or communism, including the publication of books on those subjects or simply discussion of them in open meetings, was banned. Franco's regime banned the use of Catalan in government-run institutions and during public events, and the Catalan institutions of self-government were abolished. The president of Catalonia, Lluís Companys, was taken to Spain from his exile in German-occupied France and was tortured and executed in the Montjuïc Castle of Barcelona for the crime of 'military rebellion'. During later stages of Francoist Spain, certain folkloric and religious celebrations in Catalan resumed and were tolerated. Use of Catalan in the mass media had been forbidden but was permitted from the early 1950s in the theatre. Despite the ban during the first years and the difficulties of the next period, publishing in Catalan continued throughout Franco's rule. The years after the war were extremely hard. Catalonia, like many other parts of Spain, had been devastated by the war. Recovery from the war damage was slow and made more difficult by the international trade embargo and the autarkic policies of Franco's regime. By the late 1950s, the region had recovered its pre-war economic levels and in the 1960s was the second-fastest growing economy in the world in what became known as the Spanish miracle. During this period there was a spectacular growth of industry and tourism in Catalonia that drew large numbers of workers to the region from across Spain and made the area around Barcelona one of Europe's largest industrial metropolitan areas. Transition and democratic period (1975–"present"). After Franco's death in 1975, Catalonia voted for the adoption of a democratic Spanish Constitution in 1978, under which Catalonia recovered political and cultural autonomy, restoring the Generalitat (in exile since the end of the Civil War in 1939) in 1977 and adopting a new Statute of Autonomy in 1979, which defined Catalonia as a "nationality". The first elections to the Parliament of Catalonia under this Statute gave the Catalan presidency to Jordi Pujol, leader of Convergència i Unió (CiU), a center-right Catalan nationalist electoral coalition, and Pujol was re-elected until 2003. Throughout the 1980s and 1990s, the institutions of Catalan autonomy were deployed, among them an autonomous police force, the Mossos d'Esquadra, in 1983, and the broadcasting network Televisió de Catalunya and its first channel TV3, created in 1983.
An extensive program of normalization of the Catalan language was carried out. Today, Catalonia remains one of the most economically dynamic communities of Spain. The Catalan capital and largest city, Barcelona, is a major international cultural centre and a major tourist destination. In 1992, Barcelona hosted the Summer Olympic Games. Independence movement. In November 2003, elections to the Parliament of Catalonia gave the government to a left-wing Catalanist coalition formed by the Socialists' Party of Catalonia (PSC-PSOE), Republican Left of Catalonia (ERC) and Initiative for Catalonia Greens (ICV), and the socialist Pasqual Maragall was appointed president. The new government prepared a bill for a new Statute of Autonomy, with the aim of consolidating and expanding self-government. The new Statute of Autonomy of Catalonia, approved after a referendum in 2006, was contested by important sectors of Spanish society, especially by the conservative People's Party, which sent the law to the Constitutional Court of Spain. In 2010, the Court declared invalid some of the articles that established an autonomous Catalan system of justice, improved financing, a new territorial division, the status of the Catalan language and the symbolic declaration of Catalonia as a nation. This decision was severely contested by large sectors of Catalan society, which increased demands for independence. A controversial independence referendum was held in Catalonia on 1 October 2017, using a disputed voting process. It was declared illegal and suspended by the Constitutional Court of Spain, because it breached the 1978 Constitution. Subsequent developments saw, on 27 October 2017, a symbolic declaration of independence by the Parliament of Catalonia, the enforcement of direct rule by the Spanish government through the use of Article 155 of the Constitution, the dismissal of the Executive Council and the dissolution of the Parliament, with a snap regional election called for 21 December 2017, which ended with a victory of pro-independence parties. Former President Carles Puigdemont and five former cabinet ministers fled Spain and took refuge in other European countries (such as Belgium, in Puigdemont's case), whereas nine other cabinet members, including vice-president Oriol Junqueras, were sentenced to prison under various charges of rebellion, sedition, and misuse of public funds. Quim Torra became the 131st President of the Government of Catalonia on 17 May 2018, after the Spanish courts blocked three other candidates. In 2018, the Assemblea Nacional Catalana joined the Unrepresented Nations and Peoples Organization (UNPO) on behalf of Catalonia. On 14 October 2019, the Spanish Supreme Court convicted several Catalan political leaders involved in organizing the referendum on Catalonia's independence from Spain, on charges ranging from sedition to misuse of public funds, with sentences ranging from 9 to 13 years in prison. This decision sparked demonstrations around Catalonia. Those convicted were later pardoned by the Spanish government and left prison in June 2021. In the early-to-mid 2020s support for independence declined. Geography. Climate. The climate of Catalonia is diverse. The populated areas lying by the coast in the Tarragona, Barcelona and Girona provinces feature a hot-summer Mediterranean climate (Köppen "Csa"). The inland part (including the Lleida province and the inner part of Barcelona province) shows a mostly Mediterranean climate (Köppen "Csa").
The Pyrenean peaks have a continental (Köppen "D") or even Alpine climate (Köppen "ET") at the highest summits, while the valleys have a maritime or oceanic climate sub-type (Köppen "Cfb"). In the Mediterranean area, summers are dry and hot with sea breezes, and the maximum temperature is around . Winter is cool or slightly cold depending on the location. It snows frequently in the Pyrenees, and it occasionally snows at lower altitudes, even by the coastline. Spring and autumn are typically the rainiest seasons, except for the Pyrenean valleys, where summer is typically stormy. The inland part of Catalonia is hotter and drier in summer. Temperature may reach , some days even . Nights are cooler there than at the coast, with temperatures of around . Fog is not uncommon in valleys and plains; it can be especially persistent, with freezing drizzle episodes and subzero temperatures during winter, mainly along the Ebro and Segre valleys and in the Plain of Vic. Topography. Catalonia has a marked geographical diversity, considering the relatively small size of its territory. The geography is conditioned by the Mediterranean coast, with of coastline, and the towering Pyrenees along the long northern border. Catalonia is divided into three main geomorphological units: the Pyrenees, the Central Catalan Depression and the Catalan Mediterranean system. The Catalan Pyrenees represent almost half of the Pyrenees in length, as they extend more than . Traditionally, a distinction is made between the Axial Pyrenees (the main part) and the Pre-Pyrenees (south of the Axial), which are mountainous formations parallel to the main mountain ranges but with lower altitudes, less steep and of a different geological formation. The highest mountain of Catalonia, located north of the comarca of Pallars Sobirà, is the Pica d'Estats (3,143 m), followed by the Puigpedrós (2,914 m). The Serra del Cadí comprises the highest peaks in the Pre-Pyrenees and forms the southern boundary of the Cerdanya valley. The Central Catalan Depression is a plain located between the Pyrenees and the Pre-Coastal Mountains. Elevation ranges from . The plains and the water that descends from the Pyrenees have made it a fertile territory for agriculture, and numerous irrigation canals have been built. Another major plain is the Empordà, located in the northeast. The Catalan Mediterranean system is based on two ranges running roughly parallel to the coast (southwest–northeast), called the Coastal and the Pre-Coastal Ranges. The Coastal Range is both the shorter and the lower of the two, while the Pre-Coastal is greater in both length and elevation. Areas within the Pre-Coastal Range include Montserrat, Montseny and the Ports de Tortosa-Beseit. Lowlands alternate with the Coastal and Pre-Coastal Ranges. The Coastal Lowland is located to the east of the Coastal Range, between it and the coast, while the Pre-Coastal Lowlands are located inland, between the Coastal and Pre-Coastal Ranges, and include the Vallès and Penedès plains. Flora and fauna. Catalonia is a showcase of European landscapes on a small scale. In just over of territory it hosts a variety of substrates, soils, climates, orientations, altitudes and distances to the sea. The area is of great ecological diversity and has a remarkable wealth of landscapes, habitats and species. The fauna of Catalonia comprises a minority of animals endemic to the region and a majority of non-endemic animals. Much of Catalonia enjoys a Mediterranean climate (except mountain areas), so many of the animals that live there are adapted to Mediterranean ecosystems.
Among mammals, wild boar and red foxes are plentiful, as well as roe deer and, in the Pyrenees, the Pyrenean chamois. Other large species such as the bear have recently been reintroduced. The waters of the Balearic Sea are rich in biodiversity, including ocean megafauna: various types of whales (such as fin, sperm, and pilot whales) and dolphins can be found in the area. Hydrography. Most of Catalonia belongs to the Mediterranean Basin. The Catalan hydrographic network consists of two important basins, that of the Ebro and the one that comprises the internal basins of Catalonia (respectively covering 46.84% and 51.43% of the territory), both of which flow to the Mediterranean. Furthermore, there is the Garona river basin, which flows to the Atlantic Ocean, but it only covers 1.73% of the Catalan territory. The hydrographic network can be divided into two sectors: a western slope, that of the Ebro river, and an eastern slope made up of minor rivers that flow to the Mediterranean along the Catalan coast. The first slope provides an average of per year, while the second only provides an average of /year. The difference is due to the large contribution of the Ebro river, of which the Segre is an important tributary. Moreover, in Catalonia there is a relative wealth of groundwater, although it is unevenly distributed between "comarques", given the complex geological structure of the territory. In the Pyrenees there are many small lakes, remnants of the ice age. The biggest are the lake of Banyoles and the recently recovered lake of Ivars. The Catalan coast is almost rectilinear, with a length of and few landforms—the most relevant are the Cap de Creus and the Gulf of Roses to the north and the Ebro Delta to the south. The Catalan Coastal Range hugs the coastline, and it is split into two segments, one between L'Estartit and the town of Blanes (the Costa Brava), and the other to the south, at the Costes del Garraf. The principal rivers in Catalonia are the Ter, the Llobregat, and the Ebro (Catalan: ), all of which run into the Mediterranean. Anthropic pressure and protection of nature. The majority of the Catalan population is concentrated in 30% of the territory, mainly in the coastal plains. Intensive agriculture, livestock farming and industrial activities have been accompanied by a massive tourist influx (more than 20 million annual visitors) and a high rate of urbanization and even of metropolisation, which has led to strong urban sprawl: two thirds of Catalans live in the urban area of Barcelona, while the proportion of urban land increased from 4.2% in 1993 to 6.2% in 2009, a growth of 48.6% in sixteen years, complemented by a dense network of transport infrastructure. This is accompanied by a certain agricultural abandonment (a decrease of 15% in the area cultivated in Catalonia between 1993 and 2009) and a general threat to the natural environment. Human activities have also put some animal species at risk, or even led to their disappearance from the territory, like the gray wolf and probably the brown bear of the Pyrenees. The pressure created by this model of life means that the country's ecological footprint exceeds its administrative area. Faced with these problems, the Catalan authorities initiated several measures whose purpose is to protect natural ecosystems. Thus, in 1990, the Catalan government created the Nature Conservation Council (Catalan: ), an advisory body with the aim of studying, protecting and managing the natural environments and landscapes of Catalonia.
In addition, in 1992 the Generalitat drew up the Plan of Spaces of Natural Interest ( or PEIN), while eighteen Natural Spaces of Special Protection ( or ENPE) have been instituted. There is a National Park, Aigüestortes i Estany de Sant Maurici; fourteen Natural Parks, Alt Pirineu, Aiguamolls de l'Empordà, Cadí-Moixeró, Cap de Creus, Sources of Ter and Freser, Collserola, Ebro Delta, Ports, Montgrí, Medes Islands and Baix Ter, Montseny, Montserrat, Sant Llorenç del Munt and l'Obac, Serra de Montsant, and the Garrotxa Volcanic Zone; as well as three Natural Places of National Interest ( or PNIN), the Pedraforca, the Poblet Forest and the Albères. Politics. After Franco's death in 1975 and the adoption of a democratic constitution in Spain in 1978, Catalonia recovered and extended the powers that it had gained in the Statute of Autonomy of 1932 but had lost with the fall of the Second Spanish Republic at the end of the Spanish Civil War in 1939. This autonomous community has gradually achieved more autonomy since the approval of the Spanish Constitution of 1978. The Generalitat holds exclusive jurisdiction in education, health, culture, environment, communications, transportation, commerce, public safety and local government, and only shares jurisdiction with the Spanish government in justice. In all, some analysts argue that formally the current system grants Catalonia "more self-government than almost any other corner in Europe". Support for Catalan nationalism ranges from a demand for further autonomy and the federalisation of Spain to the desire for independence from the rest of Spain, expressed by Catalan independentists. The first survey following the Constitutional Court ruling that cut back elements of the 2006 Statute of Autonomy, published by "La Vanguardia" on 18 July 2010, found that 46% of voters would support independence in a referendum. In February of the same year, a poll by the Open University of Catalonia gave more or less the same results. Other polls have shown lower support for independence, ranging from 40 to 49%. Although support for independence exists across the whole territory, it is significantly higher in the hinterland and the northeast, away from the more populous coastal areas such as Barcelona. Since 2011, when the question started to be regularly surveyed by the governmental Center for Public Opinion Studies (CEO), support for Catalan independence has been on the rise. According to the CEO opinion poll of July 2016, 47.7% of Catalans would vote for independence and 42.4% against it, while, on the question of preferences, according to the CEO opinion poll of March 2016, 57.2% claim to be "absolutely" or "fairly" in favour of independence. Other polls show more variable results; according to the Spanish CIS, as of December 2016, 47% of Catalans rejected independence and 45% supported it. In hundreds of non-binding local referendums on independence, organised across Catalonia from 13 September 2009, a large majority voted for independence, although critics argued that the polls were mostly held in pro-independence areas. In December 2009, 94% of those voting backed independence from Spain, on a turnout of 25%. The final local referendum was held in Barcelona, in April 2011.
On 11September2012, a pro-independence march pulled in a crowd of between 600,000 (according to the Spanish Government), 1.5million (according to the Guàrdia Urbana de Barcelona), and 2million (according to its promoters); whereas poll results revealed that half the population of Catalonia supported secession from Spain. Two major factors were Spain's Constitutional Court's 2010 decision to declare part of the 2006 Statute of Autonomy of Catalonia unconstitutional, as well as the fact that Catalonia contributes 19.49% of the central government's tax revenue, but only receives 14.03% of central government's spending. Parties that consider themselves either Catalan nationalist or independentist have been present in all Catalan governments since 1980. The largest Catalan nationalist party, Convergence and Union, ruled Catalonia from 1980 to 2003, and returned to power in the 2010 election. Between 2003 and 2010, a leftist coalition, composed by the Catalan Socialists' Party, the pro-independence Republican Left of Catalonia and the leftist-environmentalist Initiative for Catalonia-Greens, implemented policies that widened Catalan autonomy. In the 25 November 2012 Catalan parliamentary election, sovereigntist parties supporting a secession referendum gathered 59.01% of the votes and held 87 of the 135seats in the Catalan Parliament. Parties supporting independence from the rest of Spain obtained 49.12% of the votes and a majority of 74seats. Artur Mas, then the president of Catalonia, organised early elections that took place on 27September2015. In these elections, Convergència and Esquerra Republicana decided to join, and they presented themselves under the coalition named Junts pel Sí (in Catalan, Together for Yes). Junts pel Sí won 62seats and was the most voted party, and CUP (Candidatura d'Unitat Popular, a far-left and independentist party) won another 10, so the sum of all the independentist forces/parties was 72seats, reaching an absolute majority, but not in number of individual votes, comprising 47,74% of the total. Statute of Autonomy. The Statute of Autonomy of Catalonia is the fundamental organic law, second only to the Spanish Constitution from which the Statute originates. In the Spanish Constitution of 1978 Catalonia, along with the Basque Country and Galicia, was defined as a "nationality". The same constitution gave Catalonia the automatic right to autonomy, which resulted in the Statute of Autonomy of Catalonia of 1979. Both the 1979 Statute of Autonomy and the current one, approved in 2006, state that "Catalonia, as a nationality, exercises its self-government constituted as an Autonomous Community in accordance with the Constitution and with the Statute of Autonomy of Catalonia, which is its basic institutional law, always under the law in Spain". The Preamble of the 2006 Statute of Autonomy of Catalonia states that the Parliament of Catalonia has defined Catalonia as a nation, but that "the Spanish Constitution recognizes Catalonia's national reality as a nationality". While the Statute was approved by and sanctioned by both the Catalan and Spanish parliaments, and later by referendum in Catalonia, it has been subject to a legal challenge by the surrounding autonomous communities of Aragon, Balearic Islands and Valencia, as well as by the conservative People's Party. 
The objections are based on various issues such as disputed cultural heritage but, especially, on the Statute's alleged breaches of the principle of "solidarity between regions" in fiscal and educational matters enshrined by the Constitution. Spain's Constitutional Court assessed the disputed articles and, on 28 June 2010, issued its judgment on the principal allegation of unconstitutionality presented by the People's Party in 2006. The judgment granted clear passage to 182 articles of the 223 that make up the fundamental text. The court approved 73 of the 114 articles that the People's Party had contested, while declaring 14 articles unconstitutional in whole or in part and imposing a restrictive interpretation on 27 others. The court accepted the specific provision that described Catalonia as a "nation", but ruled that it was a historical and cultural term with no legal weight, and that Spain remained the only nation recognised by the constitution. Government and law. The Catalan Statute of Autonomy establishes that Catalonia, as an autonomous community, is organised politically through the Generalitat of Catalonia (Catalan: ), comprising the Parliament, the Presidency of the Generalitat, the Government or Executive Council and the other institutions established by the Parliament, among them the Ombudsman (), the Office of Auditors (), the Council for Statutory Guarantees () and the Audiovisual Council of Catalonia (). The Parliament of Catalonia (Catalan: ) is the unicameral legislative body of the Generalitat and represents the people of Catalonia. Its 135 members ("diputats") are elected by universal suffrage to serve for a four-year period. According to the Statute of Autonomy, it has powers to legislate over devolved matters such as education, health, culture, internal institutional and territorial organization, the nomination of the President of the Generalitat and control of the Government, the budget and other affairs. The last Catalan election was held on 12 May 2024, and its current speaker (president) is Josep Rull, incumbent since 10 June 2024. The President of the Generalitat of Catalonia (Catalan: ) is the highest representative of Catalonia and is also responsible for leading the government's action, presiding over the Executive Council. Since the restoration of the Generalitat on the return of democracy in Spain, the Presidents of Catalonia have been Josep Tarradellas (1977–1980, president in exile since 1954), Jordi Pujol (1980–2003), Pasqual Maragall (2003–2006), José Montilla (2006–2010), Artur Mas (2010–2016), Carles Puigdemont (2016–2017) and, after the imposition of direct rule from Madrid, Quim Torra (2018–2020), Pere Aragonès (2021–2024) and Salvador Illa (2024–). The Executive Council (Catalan: ) or Government () is the body responsible for the government of the Generalitat; it holds executive and regulatory power and is accountable to the Catalan Parliament. It comprises the President of the Generalitat, the First Minister () or the Vice President, and the ministers () appointed by the president. Its seat is the Palau de la Generalitat, Barcelona. In 2021 the government was a coalition of two parties, the Republican Left of Catalonia (ERC) and Together for Catalonia (Junts), and was made up of 14 ministers, including the Vice President, alongside the president and a secretary of government, but in October 2022 Together for Catalonia (Junts) left the coalition and the government. Security forces and Justice.
Catalonia has its own police force, the (officially called ), whose origins date back to the 18th century. Since 1980 they have been under the command of the Generalitat, and since 1994 they have expanded in number in order to replace the national Civil Guard and National Police Corps, which report directly to the Homeland Department of Spain. The national bodies retain personnel within Catalonia to exercise functions of national scope, such as overseeing ports, airports, coasts, international borders, customs offices, the identification of documents and arms control, immigration control, terrorism prevention and arms trafficking prevention, amongst others. Most of the justice system is administered by national judicial institutions; the highest body and last judicial instance in the Catalan jurisdiction, integrated into the Spanish judiciary, is the High Court of Justice of Catalonia. The criminal justice system is uniform throughout Spain, while civil law is administered separately within Catalonia. The civil laws that are subject to autonomous legislation have been codified in the Civil Code of Catalonia () since 2002. Catalonia, together with Navarre and the Basque Country, is among the Spanish communities with the highest degree of autonomy in terms of law enforcement. Administrative divisions. Catalonia is organised territorially into provinces or regions, further subdivided into comarques and municipalities. The 2006 Statute of Autonomy of Catalonia establishes the administrative organisation of the latter three. Provinces. Much like the rest of Spain, Catalonia is divided administratively into four provinces, the governing body of which is the Provincial Deputation (, , ). As of 2010, the four provinces and their populations were: Unlike vegueries, provinces do not follow the limitations of the subdivisional counties, notably Baixa Cerdanya, which is split in half between the demarcations of Lleida and Girona. This situation has led some isolated municipalities to request province changes from the Spanish government. Vegueries. Besides provinces, Catalonia is internally divided into eight regions or vegueries, based on the feudal administrative territorial jurisdiction of the Principality of Catalonia. Established in 2006, vegueries are used by the Generalitat de Catalunya with the aim of dividing Catalonia administratively more effectively. In addition, vegueries are intended to become Catalonia's first-level administrative division and a full replacement for the four deputations of the Catalan provinces, creating a council for each vegueria, but this has not been realised, as changes to the statewide provinces system are unconstitutional without a constitutional amendment. The territorial plan of Catalonia () provided six general functional areas, but was amended by Law 24/2001, of 31 December, recognizing "Alt Pirineu and Aran" as a new functional area differentiated from Ponent. After opposition from some territories, it was made possible for the Aran Valley to retain its government (the vegueria was renamed "Alt Pirineu", although the name "Alt Pirineu and Aran" is still used by the regional plan) and in 2016 the Catalan Parliament approved the eighth vegueria, Penedès, split from the Barcelona region. As of 2022, the eight regions and their populations were: Comarques. Comarques (often known as "counties" in English, but different from the historical Catalan counties) are entities composed of municipalities that internally manage their responsibilities and services.
The current regional division has its roots in a decree of the Generalitat de Catalunya of 1936, in effect until 1939, when it was suppressed by Franco. In 1987 the Catalan Government reestablished the comarcal division, and in 1988 three new comarques were added (Alta Ribagorça, Pla d'Urgell and Pla de l'Estany). Some further revisions have been made since then, such as the additions of the Moianès and Lluçanès counties, in 2015 and 2023 respectively. Except for Barcelonès, every comarca is administered by a comarcal council (). As of 2024, Catalonia is divided into 42 counties plus the Aran Valley. The latter, although previously (and still informally) considered a comarca, obtained in 1990 a particular status within Catalonia due to its differences in culture and language, being administered by a body known as the (General Council of Aran), and in 2015 it was defined as a "unique territorial entity" instead of a county. Municipalities. There are at present 947 municipalities () in Catalonia. Each municipality is run by a council () elected every four years by the residents in local elections. The council consists of a number of members () depending on population, who elect the mayor ( or ). Its seat is the town hall (, or ). Economy. Catalonia is a highly industrialized region; its nominal GDP in 2018 was €228 billion (second after the community of Madrid, €230 billion) and its per capita GDP was €30,426 ($32,888), behind Madrid (€35,041), the Basque Country (€33,223), and Navarre (€31,389). That year, GDP growth was 2.3%. Catalonia's long-term credit rating is BB (Non-Investment Grade) according to Standard & Poor's, Ba2 (Non-Investment Grade) according to Moody's, and BBB- (Low Investment Grade) according to Fitch Ratings. Catalonia's rating is tied for worst with between 1 and 5 other autonomous communities of Spain, depending on the rating agency. According to a 2020 study by Eu-Starts-Up, the Catalan capital is one of the European reference hubs for start-ups and the fifth-ranked city in the world in which to establish one of these companies, behind London, Berlin, Paris and Amsterdam. Barcelona is behind London, New York, Paris, Moscow, Tokyo, Dubai and Singapore and ahead of Los Angeles and Madrid. In the context of the 2008 financial crisis, Catalonia was expected to suffer a recession amounting to almost a 2% contraction of its regional GDP in 2009. Catalonia's debt in 2012 was the highest of all Spain's autonomous communities, reaching €13,476 million, i.e. 38% of the total debt of the 17 autonomous communities, but in recent years its economy has recovered and GDP grew by 3.3% in 2015. Catalonia is among the country subdivisions with a GDP of over 100 billion US dollars and is a member of the Four Motors for Europe organisation. The distribution of sectors is as follows: The main tourist destinations in Catalonia are the city of Barcelona, the beaches of the Costa Brava in Girona, the beaches of the Costa del Maresme and Costa del Garraf from Malgrat de Mar to Vilanova i la Geltrú and the Costa Daurada in Tarragona. In the High Pyrenees there are several ski resorts, near Lleida. On 1 November 2012, Catalonia started charging a tourist tax. The revenue is used to promote tourism, and to maintain and upgrade tourism-related infrastructure. Many of Spain's leading savings banks were based in Catalonia before the independence referendum of 2017. However, in the aftermath of the referendum, many of them moved their registered office to other parts of Spain.
That includes the two biggest Catalan banks at that time, La Caixa, which moved its office to Palma de Mallorca, and Banc Sabadell, ranked fourth among all Spanish private banks, which moved its office to Alicante. That happened after the Spanish government passed a law allowing companies to move their registered office without requiring the approval of the company's general meeting of shareholders. Overall, there was a negative net relocation rate of companies based in Catalonia moving to other autonomous communities of Spain. From the 2017 independence referendum until the end of 2018, for example, Catalonia lost 5,454 companies to other parts of Spain (mainly Madrid), 2,359 in 2018 alone, gaining 467 new ones from the rest of the country during 2018. It has been reported that the Spanish government and the Spanish King Felipe VI pressured some of the big Catalan companies to move their headquarters outside of the region. The stock market of Barcelona, which in 2016 had a volume of around €152 billion, is the second largest in Spain after Madrid's, and Fira de Barcelona organizes international exhibitions and congresses to do with different sectors of the economy. The main economic cost for Catalan families is the purchase of a home. According to data from the Society of Appraisal on 31 December 2005, Catalonia is, after Madrid, the second most expensive region in Spain for housing: €3,397/m2 on average (see Spanish property bubble). Unemployment. The unemployment rate stood at 10.5% in 2019 and was lower than the national average. Transport. Airports. Airports in Catalonia are owned and operated by Aena (a Spanish Government entity), except two airports in Lleida, which are operated by Aeroports de Catalunya (an entity belonging to the Government of Catalonia). Ports. Since the Middle Ages, Catalonia has been well integrated into international maritime networks. The port of Barcelona (owned and operated by , a Spanish Government entity) is an industrial, commercial and tourist port of worldwide importance. With 1,950,000 TEUs in 2015, it is the first container port in Catalonia, the third in Spain after Valencia and Algeciras in Andalusia, the 9th in the Mediterranean Sea, the 14th in Europe and the 68th in the world. It is the sixth largest cruise port in the world and the first in Europe and the Mediterranean, with 2,364,292 passengers in 2014. The ports of Tarragona (owned and operated by Puertos del Estado) in the southwest and Palamós near Girona in the northeast are much more modest. The port of Palamós and the other ports in Catalonia (26) are operated and administered by , a Catalan Government entity. The development of these infrastructures, resulting from the topography and history of the Catalan territory, responds strongly to the administrative and political organization of this autonomous community. Roads. There are of roads throughout Catalonia. The principal highways are the AP-7 () and the A-7 (). They follow the coast from the French border to Valencia, Murcia and Andalusia. The main roads generally radiate from Barcelona. The AP-2 () and A-2 () connect inland and onward to Madrid. Other major roads are: Publicly owned roads in Catalonia are managed either by the autonomous government of Catalonia (e.g., C- roads) or by the Spanish government (e.g., AP-, A-, N- roads). Railways. Catalonia saw the first railway construction in the Iberian Peninsula in 1848, linking Barcelona with Mataró. Given the topography, most lines radiate from Barcelona. The city has both suburban and inter-city services.
The main east coast line runs through the province connecting with the SNCF (French Railways) at Portbou on the coast. There are two publicly owned railway companies operating in Catalonia: the Catalan FGC that operates commuter and regional services, and the Spanish national Renfe that operates long-distance and high-speed rail services (AVE and Avant) and the main commuter and regional service , administered by the Catalan government since 2010. High-speed rail (AVE) services from Madrid currently reach Barcelona, via Lleida and Tarragona. The official opening between Barcelona and Madrid took place 20February2008. The journey between Barcelona and Madrid now takes about two-and-a-half hours. A connection to the French high-speed TGV network has been completed (called the Perpignan–Barcelona high-speed rail line) and the Spanish AVE service began commercial services on the line 9January2013, later offering services to Marseille on their high speed network. This was shortly followed by the commencement of commercial service by the French TGV on 17January2013, leading to an average travel time on the Paris-Barcelona TGV route of 7h42m. This new line passes through Girona and Figueres with a tunnel through the Pyrenees. Demographics. As of 2024, the official population of Catalonia was 8,067,454. 1,194,947residents did not have Spanish citizenship, accounting for about 16% of the population. The Urban Region of Barcelona includes 5,217,864people and covers an area of . The metropolitan area of the Urban Region includes cities such as L'Hospitalet de Llobregat, Sabadell, Terrassa, Badalona, Santa Coloma de Gramenet and Cornellà de Llobregat. In 1900, the population of Catalonia was 1,966,382people and in 1970 it was 5,122,567. The sizeable increase of the population was due to the demographic boom in Spain during the 1960s and early 1970s as well as in consequence of large-scale internal migration from the rural economically weak regions to its more prospering industrial cities. In Catalonia, that wave of internal migration arrived from several regions of Spain, especially from Andalusia, Murcia and Extremadura. As of 1999, it was estimated that over 60% of Catalans descended from 20thcentury migrations from other parts of Spain. Immigrants from other countries settled in Catalonia since the 1990s; a large percentage comes from Africa, Latin America and Eastern Europe, and smaller numbers from Asia and Southern Europe, often settling in urban centers such as Barcelona and industrial areas. In 2017, Catalonia had 940,497foreign residents (11.9%of the total population) with non-Spanish ID cards, without including those who acquired Spanish citizenship. Religion. Historically, all the Catalan population was Christian, specifically Catholic, but since the 1980s there has been a trend of decline of Christianity. Nevertheless, according to the most recent study sponsored by the Government of Catalonia, as of 2020, 62.3% of the Catalans identify as Christians (up from 61.9% in 2016 and 56.5% in 2014) of whom 53.0%Catholics, 7.0%Protestants and Evangelicals, 1.3%Orthodox Christians and 1.0%Jehovah's Witnesses. At the same time, 18.6% of the population identify as atheists, 8.8%as agnostics, 4.3%as Muslims, and a further 3.4% as being of other religions. Languages. 
Originating in the historic territory of Catalonia, Catalan is the official language of the Autonomous Community and has enjoyed special status since the approval of the Statute of Autonomy of 1979, which declares it to be "Catalonia's own language", a term which signifies a language given special legal status within a Spanish territory, or which is historically spoken within a given region. The other languages considered official in Catalonia are Spanish, which has official status throughout Spain, and Aranese Occitan, considered the "own language" of the Val d'Aran territory. Given this, the sole official language for toponymy throughout Catalonia is Catalan, except in the Val d'Aran, where Occitan fulfils this role. According to the linguistic census held by the Government of Catalonia in 2013, Spanish is the most spoken language in Catalonia (46.53% claim Spanish as "their own language"), followed by Catalan (37.26% claim Catalan as "their own language"). In everyday use, 11.95% of the population claim to use both languages equally, whereas 45.92% mainly use Spanish and 35.54% mainly use Catalan. There is a significant difference between the Barcelona metropolitan area (and, to a lesser extent, the Tarragona area), where Spanish is spoken more than Catalan, and the more rural and small-town areas, where Catalan clearly prevails over Spanish. Since the Statute of Autonomy of 1979, Aranese (a Gascon dialect of Occitan) has also been official and subject to special protection in the Val d'Aran. This small area of 7,000 inhabitants was the only place where a dialect of Occitan had received full official status. Then, on 9 August 2006, when the new Statute came into force, Occitan became official throughout Catalonia. Occitan is the mother tongue of 22.4% of the population of the Val d'Aran, which has attracted heavy immigration from other Spanish regions to work in the service industry. Catalan Sign Language is also officially recognised. Although not considered an "official language" in the same way as Catalan, Spanish, and Occitan, Catalan Sign Language, with about 18,000 users in Catalonia, is granted official recognition and support: "The public authorities shall guarantee the use of Catalan sign language and conditions of equality for deaf people who choose to use this language, which shall be the subject of education, protection and respect." As had been the case since the ascent of the Bourbon dynasty to the throne of Spain after the War of the Spanish Succession, and with the exception of the short period of the Second Spanish Republic, under Francoist Spain Catalan was banned from schools and from all other official use, so that, for example, families were not allowed to officially register children with Catalan names. During the Francoist period especially, the Spanish Government actively suppressed the Catalan language, criminalizing its public use and promoting both diglossia and linguistic substitution. Although never completely banned, Catalan-language publishing was severely restricted during the early 1940s, with only religious texts and small-run self-published texts being released. Some books were published clandestinely or circumvented the restrictions by showing publishing dates prior to 1936. This policy was changed in 1946, when restricted publishing in Catalan resumed. Rural–urban migration originating in other parts of Spain also reduced the social use of Catalan in urban areas and increased the use of Spanish. Lately, a similar sociolinguistic phenomenon has occurred with foreign immigration.
Catalan cultural activity increased in the 1960s, and the teaching of Catalan began thanks to the initiative of associations such as Òmnium Cultural. After the end of Francoist Spain, the newly established self-governing democratic institutions in Catalonia embarked on a long-term language policy to recover the use of Catalan and have, since 1983, enforced laws which attempt to protect and extend its use. This policy, known as "linguistic normalisation" ( in Catalan, in Spanish), has been supported by the vast majority of Catalan political parties over the last thirty years. Some groups consider these efforts a way to discourage the use of Spanish, whereas others, including the Catalan government and the European Union, consider the policies respectful, or even an example which "should be disseminated throughout the Union". Today, Catalan is the main language of the Catalan autonomous government and the other public institutions that fall under its jurisdiction. Basic public education is given mainly in Catalan, but there are also some hours per week of Spanish-language instruction. Although the law requiring businesses to display all information (e.g. menus, posters) at least in Catalan is not systematically enforced, the majority of the linguistic landscape in Catalonia is monolingual in Catalan. There is no obligation to display this information in either Occitan or Spanish, although there is no restriction on doing so in these or other languages, and sometimes private businesses will display their information in Spanish or English as well. The use of fines was introduced in a 1997 linguistic law that aims to increase the public use of Catalan and defend the rights of Catalan speakers. On the other hand, the Spanish Constitution does not recognize equal language rights for linguistic minorities, since it enshrined Spanish as the only official language of the state, knowledge of which is compulsory. Numerous laws, regarding for instance the labelling of pharmaceutical products, in effect make Spanish the only language of compulsory use. The law ensures that both Catalan and Spanish – being official languages – can be used by citizens without prejudice in all public and private activities. The Generalitat uses Catalan in its communications and notifications addressed to the general population, but citizens can also receive information from the Generalitat in Spanish if they so wish. Debates in the Catalan Parliament take place almost exclusively in Catalan, and Catalan public television broadcasts programs in Catalan. Due to the intense immigration which Spain in general and Catalonia in particular experienced in the first decade of the 21st century, many foreign languages are spoken in various cultural communities in Catalonia, of which Rif-Berber, Moroccan Arabic, Romanian and Urdu are the most common. In Catalonia there is a high social and political consensus on the language policies favouring Catalan, including among Spanish speakers and speakers of other languages. However, some of these policies have been criticised for trying to promote Catalan by imposing fines on businesses. For example, following the passage of the law on Catalan cinema in March 2010, which established that half of the movies shown in Catalan cinemas had to be in Catalan, a general strike of 75% of the cinemas took place. The Catalan government gave in and dropped the clause that forced 50% of the movies to be dubbed or subtitled in Catalan before the law came into effect.
On the other hand, organisations such as Plataforma per la Llengua have reported various violations of the linguistic rights of Catalan speakers in Catalonia and the other Catalan-speaking territories in Spain, most of them caused by the institutions of the Spanish government in these territories. The Catalan language policy has been challenged by some political parties in the Catalan Parliament. Citizens, a now extra-parliamentary party, is credited with breaking the consensus around language policy from the late 2000s through the 2010s. Nowadays, the far-right Spanish ultranationalist party Vox is the main supporter of a linguistic policy that favours Spanish in Catalonia. The Catalan branch of the People's Party has a more ambiguous position on the issue: on one hand, it demands a bilingual Catalan–Spanish education and a more balanced language policy that would defend Catalan without favouring it over Spanish, whereas on the other hand, a few local PP politicians have supported in their municipalities measures privileging Catalan over Spanish, and the party has defended some aspects of the official language policies, sometimes against the positions of its colleagues from other parts of Spain. Historically, however, none of these three parties has held significant office in either regional or local government: none has governed the Generalitat or presided over any Diputació or Consell Comarcal, and as of 2025 they hold 4 mayoralties out of 947. Culture. Art and architecture. Catalonia has given the world many important figures in the arts. Internationally known Catalan painters include, among others, Salvador Dalí, Joan Miró and Antoni Tàpies. Closely linked with the Catalan pictorial milieu, Pablo Picasso lived in Barcelona during his youth, training there as an artist, before going on to create the movement of Cubism. Other important artists are Claudi Lorenzale for the medieval Romanticism that marked the artistic Renaixença, Marià Fortuny for the Romanticism and Catalan Orientalism of the nineteenth century, Ramon Casas and Santiago Rusiñol, the main representatives of the pictorial current of Catalan modernism from the end of the nineteenth century to the beginning of the twentieth century, Josep Maria Sert for early 20th-century Noucentisme, and Josep Maria Subirachs for expressionist or abstract sculpture and painting of the late twentieth century. The most important painting museums of Catalonia are the Teatre-Museu Dalí in Figueres, the National Art Museum of Catalonia (MNAC), the Picasso Museum, the Fundació Antoni Tàpies, the Joan Miró Foundation, the Barcelona Museum of Contemporary Art (MACBA), the Centre of Contemporary Culture of Barcelona (CCCB), and the CaixaForum. In the field of architecture, the artistic styles prevalent in Europe were developed and adapted to Catalonia, leaving footprints in many churches, monasteries and cathedrals, in the Romanesque (the best examples of which are located in the northern half of the territory) and Gothic styles. The Gothic that developed in Barcelona and its area of influence is known as Catalan Gothic, with some particular characteristics; the church of Santa Maria del Mar is an example of this style. During the Middle Ages, many fortified castles were built by feudal nobles to mark their power. There are some examples of Renaissance (such as the Palau de la Generalitat), Baroque and Neoclassical architecture. In the late nineteenth century, Modernism (Art Nouveau) appeared as the national art.
The world-renowned Catalan architects of this style are Antoni Gaudí, Lluís Domènech i Montaner and Josep Puig i Cadafalch. Thanks to the urban expansion of Barcelona during the last decades of the nineteenth century and the first ones of the next, many buildings of the Eixample are modernist. Architectural rationalism became especially relevant in Catalonia during the Republican era (1931–1939), its leading figures being Josep Lluís Sert and Josep Torres i Clavé, members of the GATCPAC; in contemporary architecture, Ricardo Bofill and Enric Miralles stand out. Monuments and World Heritage Sites. There are several UNESCO World Heritage Sites in Catalonia. Literature. The oldest surviving literary use of the Catalan language is considered to be the religious text known as the Homilies d'Organyà, written either in the late 11th or the early 12th century. There are two historical moments of splendor in Catalan literature. The first begins with the historiographic chronicles of the 13th century (chronicles written between the thirteenth and fourteenth centuries narrating the deeds of the monarchs and leading figures of the Crown of Aragon) and the subsequent Golden Age of the 14th and 15th centuries. After that period, between the 16th and 19th centuries, Romantic historiography defined this era as the Decadència, considered the "decadent" period in Catalan literature because of a general falling into disuse of the vernacular language in cultural contexts and a lack of patronage among the nobility. The second moment of splendor began in the 19th century with the cultural and political Renaixença (Renaissance), represented by writers and poets such as Jacint Verdaguer, Víctor Català (pseudonym of Caterina Albert i Paradís), Narcís Oller, Joan Maragall and Àngel Guimerà. During the 20th century, avant-garde movements developed, initiated by the Generation of '14 (called Noucentisme in Catalonia), represented by Eugenio d'Ors, Joan Salvat-Papasseit, Josep Carner, Carles Riba, J.V. Foix and others. During the dictatorship of Primo de Rivera, the Civil War (Generation of '36) and the Francoist period, Catalan literature was maintained despite the repression of the Catalan language, and was often produced in exile. The most outstanding authors of this period are Salvador Espriu, Josep Pla, Josep Maria de Sagarra (who are considered mainly responsible for the renewal of Catalan prose), Mercè Rodoreda, Joan Oliver Sallarès or "Pere Quart", Pere Calders, Gabriel Ferrater, Manuel de Pedrolo, Agustí Bartra and Miquel Martí i Pol. In addition, several foreign writers who fought in the International Brigades, or in other military units, have since recounted their experiences of fighting in their works, historical or fictional, for example George Orwell in "Homage to Catalonia" (1938) or Claude Simon in "Le Palace" (1962) and "Les Géorgiques" (1981). After the transition to democracy (1975–1978) and the restoration of the Generalitat (1977), literary life and the editorial market returned to normality, and literary production in Catalan has been bolstered by a number of language policies intended to protect Catalan culture. Besides the aforementioned authors, other relevant 20th-century writers of the Francoist and democratic periods include Joan Brossa, Agustí Bartra, Manuel de Pedrolo, Pere Calders and Quim Monzó. Ana María Matute, Jaime Gil de Biedma, Manuel Vázquez Montalbán and Juan Goytisolo are among the most prominent Catalan writers in the Spanish language since the democratic restoration in Spain. Festivals and public holidays.
Castells are one of the main manifestations of Catalan popular culture. The activity consists of constructing human towers by competing "colles" (teams). This practice originated in Valls, in the region of the Camp de Tarragona, during the 18th century, and later spread to the rest of the territory, especially in the late 20th century. The tradition of els Castells i els Castellers was declared a Masterpiece of the Oral and Intangible Heritage of Humanity by UNESCO in 2010. In main celebrations, other elements of Catalan popular culture are also usually present: parades with "gegants" (giants), bigheads, stick-dancers and musicians, and the "correfoc", where devils and monsters dance and spray showers of sparks using firecrackers. Another traditional celebration in Catalonia is La Patum de Berga, declared a Masterpiece of the Oral and Intangible Heritage of Humanity by UNESCO on 25 November 2005. Christmas in Catalonia lasts two days, plus Christmas Eve. On the 25th, Christmas is celebrated, followed by a similar feast on the 26th, called Sant Esteve (Saint Stephen's Day). This allows families to visit and dine with different sectors of the extended family or get together with friends on the second day. One of the most deeply rooted Christmas traditions is the popular figure of the "tió de Nadal", consisting of an (often hollow) log with a face painted on it and often two little front legs appended, usually wearing a Catalan hat and scarf. The word has nothing to do with the Spanish word "tío", meaning uncle; "tió" means log in Catalan. The log is sometimes "found in the woods" (in an event staged for children) and then adopted and taken home, where it is fed and cared for during a month or so. On Christmas Day or on Christmas Eve, a game is played where children march around the house singing a song requesting the log to poop; then they hit the log with a stick, to make it poop, and, as if through magic, it poops candy and sometimes other small gifts. Usually, the larger or main gifts are brought by the Three Kings on 6 January, and the tió only brings small things. In addition to traditional local Catalan culture, traditions from other parts of Spain can be found as a result of migration from other regions, for instance the celebration of the Andalusian Feria de Abril in Catalonia. On 28 July 2010, Catalonia became the second Spanish territory, after the Canary Islands, to ban bullfighting. The ban, which went into effect on 1 January 2012, had originated in a popular petition supported by over 180,000 signatures. Music and dance. The sardana is considered to be the most characteristic Catalan folk dance; it is interpreted to the rhythm of the tamborí, the tible and the tenora (from the oboe family), the trumpet, the trombó (trombone), the fiscorn (family of bugles) and the three-stringed contrabaix, played by a cobla, and is danced in a circle. Other tunes and dances of the traditional music are the contrapàs (obsolete today), the ball de bastons (the "dance of sticks"), the moixiganga, the goigs (popular songs), the galops and, in the southern part, the jota. The "havaneres" are characteristic of some marine localities of the Costa Brava, especially during the summer months when these songs are sung outdoors, accompanied by a "cremat" of burned rum. Until the nineteenth century, as in much of Europe, art music was cultivated primarily in a liturgical setting, particularly marked by the Escolania de Montserrat.
The main Western musical trends have marked these productions, from medieval monodies and polyphonies, with the work of Abbot Oliba in the eleventh century or the compilation Llibre Vermell de Montserrat ("Red Book of Montserrat") from the fourteenth century. Through the Renaissance there were authors such as Pere Albert Vila, Joan Brudieu or the two Mateu Fletxa ("The Old" and "The Young"). The Baroque had composers like Joan Cererols. Romantic music was represented by composers such as Fernando Sor, Josep Anselm Clavé (father of the choral movement in Catalonia and responsible for the folk music revival) and Felip Pedrell. Modernisme was also expressed in musical terms from the end of the 19th century onwards, mixing folkloric and post-Romantic influences, through the works of Isaac Albéniz and Enric Granados. The avant-garde spirit initiated by the modernists was prolonged throughout the twentieth century thanks to the activities of the Orfeó Català, a choral society founded in 1891, with its monumental concert hall, the Palau de la Música Catalana, built by Lluís Domènech i Montaner from 1905 to 1908, the Barcelona Symphony Orchestra, created in 1944, and composers, conductors and musicians engaged against Francoism, like Robert Gerhard, Eduard Toldrà and Pau Casals. Performances of opera, mostly imported from Italy, began in the 18th century, but some native operas were written as well, including those by Domènec Terradellas, Carles Baguer, Ramon Carles, Isaac Albéniz and Enric Granados. Barcelona's main opera house, the Gran Teatre del Liceu (opened in 1847), remains one of the most important in Spain and hosts one of the most prestigious music schools in Barcelona, the Conservatori Superior de Música del Liceu. Several lyrical artists trained by this institution gained international renown during the 20th century, such as Victoria de los Ángeles, Montserrat Caballé, Giacomo Aragall and Josep Carreras. The cellist Pau Casals is admired as an outstanding player. Other popular musical styles were born in the second half of the 20th century, such as the Nova Cançó from the 1960s with Lluís Llach and the group Els Setze Jutges, the Catalan rumba in the 1960s with Peret, and Catalan rock from the late 1970s, with La Banda Trapera del Río and Decibelios for punk rock, Sau, Els Pets, Sopa de Cabra or Lax'n'Busto for pop rock and Sangtraït for hard rock, as well as electropop since the 1990s with OBK and indie pop from the 1990s. Media and cinema. Catalonia is, along with Madrid, the autonomous community with the most media outlets (TV, magazines, newspapers, etc.). In Catalonia there is a wide variety of local and comarcal media. With the restoration of democracy, many newspapers and magazines, until then in the hands of the Franco government, were recovered in order to convert them into free and democratic media, while local radio and television began broadcasting. Televisió de Catalunya, which broadcasts entirely in the Catalan language, is the main Catalan public network. Its channels include TV3, El 33, Super3, 3/24, Esport3 and TV3CAT. In 2018, TV3 was the most-watched television channel in Catalonia for the ninth consecutive year. Spain-wide television channels broadcasting in Catalonia in Spanish include Televisión Española (with a few programmes in Catalan), Antena 3, Cuatro, Telecinco, and La Sexta. Other smaller Catalan television channels include local television channels, notably betevé, owned by the City Council of Barcelona and broadcast in Catalan.
The two main Catalan newspapers of general information are "El Periódico de Catalunya" and "La Vanguardia", both with editions in Catalan and Spanish. Newspapers published only in Catalan include "Ara" and "El Punt Avui" (from the fusion of "El Punt" and "Avui" in 2011), as well as most of the local press. Spanish newspapers, such as "El País", "El Mundo" or "La Razón", can also be acquired. Catalonia has a long radio tradition; the first regular radio broadcast in the country was from Ràdio Barcelona in 1924. Today, the public Catalunya Ràdio (owned by the Catalan Media Corporation) and the private RAC 1 (belonging to Grup Godó) are the two main radio stations of Catalonia, both broadcasting in Catalan. Regarding cinema, three styles have dominated since the democratic transition. First, auteur cinema, in the continuity of the Barcelona School, emphasizes experimentation and form while focusing on developing social and political themes. Championed first by Josep Maria Forn and Bigas Luna, then by Marc Recha, Jaime Rosales and Albert Serra, this genre has achieved some international recognition. Second, the documentary became another genre particularly representative of contemporary Catalan cinema, boosted by Joaquim Jordà i Català and José Luis Guerín. Later, horror films and thrillers also emerged as a specialty of the Catalan film industry, thanks in particular to the vitality of the Sitges Film Festival, created in 1968. Several directors have gained worldwide renown thanks to this genre, starting with Jaume Balagueró and his series "REC" (co-directed with the Valencian Paco Plaza), Juan Antonio Bayona with "El Orfanato", and Jaume Collet-Serra with "Orphan", "Unknown" and "Non-Stop". Catalan actors such as Sergi López have appeared in Spanish and international productions. The Museum of Cinema – Tomàs Mallol Collection (Museu del Cinema – Col·lecció Tomàs Mallol in Catalan) in Girona is home to important permanent exhibitions of cinema and pre-cinema objects. Other important institutions for the promotion of cinema are the Gaudí Awards (Premis Gaudí in Catalan), which in 2009 replaced the Barcelona Film Awards (themselves created in 2002) and serve as the Catalan equivalent of the Spanish Goya Awards or the French César. National symbols. Catalonia has its own representative and distinctive national symbols, some of them officially recognized, such as the flag (the Senyera), the national day (the Diada Nacional de Catalunya, 11 September) and the anthem ("Els Segadors"). In addition, various celebrations, objects, images, people or cultural icons maintain recognition at a national or international level as Catalan national symbols, such as St. George's Day (Diada de Sant Jordi), a festival widely celebrated in all Catalan towns on 23 April and dedicated to the patron saint of Catalonia, which includes an exchange of books and roses between sweethearts and loved ones, thereby serving the same romantic purpose as Saint Valentine's Day in Anglophone countries. Philosophy. "Seny" is a form of ancestral Catalan wisdom or sensibleness. It involves well-pondered perception of situations, level-headedness, awareness, integrity, and right action. Many Catalans consider seny something unique to their culture; it is based on a set of ancestral local customs stemming from the scale of values and social norms of their society. Sport. Sport has had a distinct importance in Catalan life and culture since the beginning of the 20th century; consequently, the region has a well-developed sports infrastructure. The main sports are football, basketball, handball, rink hockey, tennis and motorsport.
While the most popular sports are represented at the international level by the Spanish national teams, Catalonia fields its own teams in some minor sports, such as korfball, futsal or rugby league. Various Catalan sports federations have a long tradition, and some of them participated in the founding of international sports federations, such as the Catalan Federation of Rugby, which was one of the founding members of the Fédération Internationale de Rugby Amateur (FIRA) in 1934. The majority of Catalan sport federations are part of the Sports Federation Union of Catalonia (Catalan: Unió de Federacions Esportives de Catalunya), founded in 1933. The presence of separate Catalan teams has caused disputes with Spanish sports institutions, as happened to roller hockey in the controversial Fresno Case (2004). The Catalan Football Federation also periodically fields a national team against international opposition, organizing friendly matches. In recent years they have played against Bulgaria, Argentina, Brazil, the Basque Country, Colombia, Nigeria, Cape Verde and Tunisia. The biggest football clubs are Barcelona (also known as "Barça"), who have won five European Cups (UEFA Champions League), and Espanyol, who have twice been runners-up of the UEFA Cup (now UEFA Europa League). As of December 2024, Barça, Espanyol and Girona FC play in the top Spanish league (La Liga). Catalan water polo is one of the main powers of the Iberian Peninsula. Catalans have triumphed in water polo competitions at the European and world level, both at club level (Barcelona was champion of Europe in 1981/82 and Catalunya in 1994/95) and with the national team (one gold and one silver in Olympic Games and World Championships). Catalonia also has many international synchronized swimming champions. Motorsport has a long tradition in Catalonia, involving many people, with some world champions and several competitions organized since the beginning of the 20th century. The Circuit de Catalunya, built in 1991, is one of the main motorsport venues, holding the Catalan motorcycle Grand Prix, the Spanish F1 Grand Prix, a DTM race, and several other races. Catalonia has hosted many major international sporting events, such as the 1992 Summer Olympics in Barcelona, as well as the 1955 Mediterranean Games, the 2013 World Aquatics Championships and the 2018 Mediterranean Games. It annually holds the fourth-oldest still-existing cycling stage race in the world, the Volta a Catalunya (Tour of Catalonia). Cuisine. Catalan gastronomy has a long culinary tradition. Various local food recipes have been described in documents dating from the fifteenth century. As with all the cuisines of the Mediterranean, Catalan dishes make abundant use of fish, seafood, olive oil, bread and vegetables. Regional specialties include "pa amb tomàquet" (bread with tomato), which consists of bread (sometimes toasted) and tomato, seasoned with olive oil and salt. Often the dish is accompanied by any number of sausages (cured botifarres, fuet, Iberian ham, etc.) or cheeses. Other dishes include the "suquet de peix" (fish stew) and, as a dessert, Catalan cream ("crema catalana"). Catalan vineyards also produce several wines, such as Priorat, Montsant, Penedès and Empordà. There is also a sparkling wine, the cava. Catalonia is internationally recognized for its fine dining. Three of the World's 50 Best Restaurants are in Catalonia, and four restaurants have three Michelin stars, including restaurants like El Bulli or El Celler de Can Roca, both of which have regularly dominated international rankings of restaurants.
The region has been awarded the European Region of Gastronomy title for the year 2016.
6823
125972
https://en.wikipedia.org/wiki?curid=6823
Konstantinos Kanaris
Konstantinos Kanaris (c. 1790 – 2 September 1877), also anglicised as Constantine Kanaris or Canaris, was a Greek statesman, admiral, and a hero of the Greek War of Independence. Although he was never a member of the revolutionary organization Filiki Eteria, his fleet engaged in several successful battles and operations against the Ottoman Navy from 1821 to 1824, most famously burning the Ottoman flagship off Chios in retaliation for the Chios massacre, which elevated him to the status of national hero. Despite the destruction of his home island of Psara in 1824 and the ambitious but failed Raid on Alexandria in 1825, he remained a prominent ally of Ioannis Kapodistrias until the latter's assassination in 1831, which led to his retirement. After the 3 September 1843 Revolution, he returned to public life as a prominent member of the powerful Russian Party and became the country's second Prime Minister in 1844, presiding over the fall of his party in government. During Otto's constitutional reign, he returned as Prime Minister in 1848 and became Minister of the Navy in 1854 amidst the Crimean War. He played a prominent role in Otto's deposition in 1862 and, under George I, became Prime Minister twice in 1864, resigning both times to retire in Athens. He returned to the premiership to lead a grand coalition government in 1877 before dying of a heart attack, becoming the second Prime Minister to die in office. His most significant actions as head of government were the ratification of the country's first two constitutions while in office, in 1844 and 1864. He remains a celebrated figure among Greeks and is recognised as the maritime leader of the Greek revolutionaries during the War of Independence. Biography. Early life. Konstantinos Kanaris was born and grew up on the island of Psara, close to the island of Chios, in the Aegean. The exact year of his birth is unknown. Official records of the Hellenic Navy indicate 1795; however, modern Greek historians consider 1790 or 1793 to be more probable. He was left an orphan at a young age. Having to support himself, he chose to become a seaman, like most members of his family since the beginning of the 18th century. He was subsequently hired as a ship's boy on the brig of his uncle Dimitris Bourekas. Military career. Kanaris gained his fame during the Greek War of Independence (1821–1829). Unlike most other prominent figures of the War, he had never been initiated into the "Filiki Eteria" (Society of Friends), which played a significant role in the uprising against the Ottoman Empire, primarily by the secret recruitment of supporters against Turkish rule. By early 1821, the movement had gained enough support to launch a revolution. This seems to have inspired Kanaris, who was in Odessa at the time. He returned to the island of Psara in haste and was present when it joined the uprising on 10 April 1821. The island formed its own fleet, and the famed seamen of Psara, already known for their well-equipped ships and successful battles against sea pirates, proved to be highly effective in naval warfare. Kanaris soon distinguished himself as a fire ship captain. At Chios, on the moonless night of 6–7 June 1822, forces under his command destroyed the flagship of Nasuhzade Ali Pasha, Kapudan Pasha (Grand Admiral) of the Ottoman fleet, in revenge for the Chios massacre. The admiral was holding a "Bayram" celebration, allowing Kanaris and his men to position their fire ship without being noticed. When the flagship's powder store caught fire, all men aboard were instantly killed.
The Turkish casualties included both naval officers and common sailors, as well as Nasuhzade Ali Pasha himself. Kanaris led another successful attack against the Ottoman fleet at Tenedos in November 1822. He was famously said to have encouraged himself by murmuring "Konstantí, you are going to die" every time he was approaching a Turkish warship on the fire boat he was about to detonate. The Ottoman fleet captured Psara on 21 June 1824. A part of the population, including Kanaris, managed to flee the island, but those who did not were either sold into slavery or slaughtered. After the destruction of his home island, he continued to lead attacks against Turkish forces. In August 1824, he engaged in naval combat in the Dodecanese. The following year, Kanaris led the Greek raid on Alexandria, a daring attempt to destroy the Egyptian fleet with fire ships that might have been successful if the wind had not failed just after the Greek ships entered Alexandria harbour. After the end of the War and the independence of Greece, Kanaris became an officer of the new Hellenic Navy, reaching the rank of admiral, and a prominent politician. Political career. Konstantinos Kanaris was one of the few who enjoyed the personal confidence of Ioannis Kapodistrias, the first Head of State of independent Greece. After the assassination of Kapodistrias on 9 October 1831, he retired to the island of Syros. During the reign of King Otto I, Kanaris served as Minister in various governments and then as Prime Minister in the provisional government (16 February – 30 March 1844). He served a second term (15 October 1848 – 12 December 1849), and as Navy Minister in the 1854 cabinet of Alexandros Mavrokordatos. In 1862, he was among the few War of Independence veterans who took part in the bloodless insurrection that deposed the increasingly unpopular King Otto I and led to the election of Prince William of Denmark as King George I of Greece. During George's reign, Kanaris served as Prime Minister for a third term (6 March – 16 April 1864), a fourth term (26 July 1864 – 26 February 1865), and a fifth and last term (7 June – 2 September 1877). Kanaris died on 2 September 1877 whilst still serving in office as Prime Minister. Following his death, his government remained in power until 14 September 1877 without agreeing on a replacement at its head. He was buried in the First Cemetery of Athens and his heart was placed in a silver urn. Legacy. Konstantinos Kanaris is considered a national hero in Greece and ranks amongst the most notable participants of the War of Independence. Many statues and busts have been erected in his honour, such as "Kanaris at Chios" by Benedetto Civiletti in Palermo, a statue by Lazaros Fytalis in Athens, and a bust by David d'Angers. He was also featured on a Greek ₯1 coin and a ₯100 banknote issued by the Bank of Greece. In his honour, several ships of the Hellenic Navy have been named after him. "Te Korowhakaunu / Kanáris Sound", a section of Taiari / Chalky Inlet in New Zealand's Fiordland National Park, was named after Konstantinos Kanaris by the French navigator and explorer Jules de Blosseville (1802–1833). Family. In 1817, Konstantinos Kanaris married Despoina Maniatis, from a historical family of Psara. They had seven children. Wilhelm Canaris, a German admiral, speculated that he might be a descendant of Konstantinos Kanaris. An official genealogical family history researched in 1938 showed, however, that he was of Italian descent and not related to the Kanaris family from Greece.
6824
48071599
https://en.wikipedia.org/wiki?curid=6824
Carl Sagan
Carl Edward Sagan (November 9, 1934 – December 20, 1996) was an American astronomer, planetary scientist and science communicator. His best known scientific contribution is his research on the possibility of extraterrestrial life, including experimental demonstration of the production of amino acids from basic chemicals by exposure to light. He assembled the first physical messages sent into space, the Pioneer plaque and the Voyager Golden Record, which are universal messages that could potentially be understood by any extraterrestrial intelligence that might find them. He argued in favor of the hypothesis, which has since been accepted, that the high surface temperatures of Venus are the result of the greenhouse effect. Initially an assistant professor at Harvard, Sagan later moved to Cornell University, where he spent most of his career. He published more than 600 scientific papers and articles and was author, co-author or editor of more than 20 books. He wrote many popular science books, such as "The Dragons of Eden", "Broca's Brain", "Pale Blue Dot" and "The Demon-Haunted World". He also co-wrote and narrated the award-winning 1980 television series "Cosmos: A Personal Voyage", which became the most widely watched series in the history of American public television: "Cosmos" has been seen by at least 500 million people in 60 countries. A book, also called "Cosmos", was published to accompany the series. Sagan also wrote a science-fiction novel, published in 1985, called "Contact", which became the basis for the 1997 film "Contact". His papers, comprising 595,000 items, are archived in the Library of Congress. Sagan was a popular public advocate of skeptical scientific inquiry and the scientific method; he pioneered the field of exobiology and promoted the search for extraterrestrial intelligence (SETI). He spent most of his career as a professor of astronomy at Cornell University, where he directed the Laboratory for Planetary Studies. Sagan and his works received numerous awards and honors, including the NASA Distinguished Public Service Medal, the National Academy of Sciences Public Welfare Medal, the Pulitzer Prize for General Nonfiction (for his book "The Dragons of Eden"), and (for "Cosmos: A Personal Voyage") two Emmy Awards, the Peabody Award, and the Hugo Award. He married three times and had five children. After developing myelodysplasia, Sagan died of pneumonia at the age of 62 on December 20, 1996. Early life. Childhood. Carl Edward Sagan was born on November 9, 1934, in the Bensonhurst neighborhood of New York City's Brooklyn borough. His mother, Rachel Molly Gruber (1906–1982), was a housewife from New York City; his father, Samuel Sagan (1905–1979), was a Ukrainian-born garment worker who had emigrated from Kamianets-Podilskyi (then in the Russian Empire). Sagan was named in honor of his maternal grandmother, Chaiya Clara, who had died while giving birth to her second child; she was, in Sagan's words, "the mother she [Rachel] never knew." Sagan's maternal grandfather later married a woman named Rose, who, as Sagan's sister Carol would later say, was "never accepted" as Rachel's mother because Rachel "knew she [Rose] wasn't her birth mother." Sagan's family lived in a modest apartment in Bensonhurst. He later described his family as Reform Jews, one of the more liberal of Judaism's four main branches. He and his sister agreed that their father was not especially religious, but that their mother "definitely believed in God, and was active in the temple [...] and served only kosher meat."
During the worst years of the Depression, his father worked as a movie theater usher. According to biographer Keay Davidson, Sagan experienced a kind of "inner war" as a result of his close relationship with both his parents, who were in many ways "opposites." He traced his analytical inclinations to his mother, who had been extremely poor as a child in New York City during World War I and the 1920s, and whose later intellectual ambitions were sabotaged by her poverty, status as a woman and wife, and Jewish ethnicity. Davidson suggested she "worshipped her only son, Carl" because "he would fulfill her unfulfilled dreams." Sagan believed that he had inherited his sense of wonder from his father, who spent his free time giving apples to the poor or helping soothe tensions between workers and management within New York City's garment industry. Although awed by his son's intellectual abilities, Sagan's father also took his inquisitiveness in stride, viewing it as part of growing up. Later, during his career, Sagan would draw on his childhood memories to illustrate scientific points, as he did in his book "Shadows of Forgotten Ancestors". Describing his parents' influence on his later thinking, Sagan said: "My parents were not scientists. They knew almost nothing about science. But in introducing me simultaneously to skepticism and to wonder, they taught me the two uneasily cohabiting modes of thought that are central to the scientific method." He recalled that a defining moment in his development came when his parents took him, at age four, to the 1939 New York World's Fair. He later described his vivid memories of several exhibits there. One, titled "America of Tomorrow", included a moving map, which, as he recalled, "showed beautiful highways and cloverleaves and little General Motors cars all carrying people to skyscrapers, buildings with lovely spires, flying buttresses—and it looked great!" Another involved a flashlight shining on a photoelectric cell, which created a crackling sound, and another showed how the sound from a tuning fork became a wave on an oscilloscope. He also saw an exhibit of the then-nascent medium known as television. Remembering it, he later wrote: "Plainly, the world held wonders of a kind I had never guessed. How could a tone become a picture and light become a noise?" Sagan also saw one of the fair's most publicized events: the burial at Flushing Meadows of a time capsule, which contained mementos from the 1930s to be recovered by Earth's descendants in a future millennium. Davidson wrote that this "thrilled Carl." As an adult, inspired by his memories of the World's Fair, Sagan and his colleagues would create similar time capsules to be sent out into the galaxy: the Pioneer plaque and the "Voyager Golden Record" précis. During World War II, Sagan's parents worried about the fate of their European relatives, but he was generally unaware of the details of the ongoing war. He wrote, "Sure, we had relatives who were caught up in the Holocaust. Hitler was not a popular fellow in our household... but on the other hand, I was fairly insulated from the horrors of the war." His sister, Carol, said that their mother "above all wanted to protect Carl... she had an extraordinarily difficult time dealing with World War II and the Holocaust." Sagan's book "The Demon-Haunted World" (1996) included his memories of this conflicted period, when his family dealt with the realities of the war in Europe, but tried to prevent it from undermining his optimistic spirit. 
Soon after entering elementary school, Sagan began to express his strong inquisitiveness about nature. He recalled taking his first trips to the public library alone, at age five, when his mother got him a library card. He wanted to learn what stars were, since none of his friends or their parents could give him a clear answer: "I went to the librarian and asked for a book about stars [...] and the answer was stunning. It was that the Sun was a star, but really close. The stars were suns, but so far away they were just little points of light. The scale of the universe suddenly opened up to me. It was a kind of religious experience. There was a magnificence to it, a grandeur, a scale which has never left me. Never ever left me." When he was about six or seven, he and a close friend took trips to the American Museum of Natural History, in Manhattan. While there, they visited the Hayden Planetarium and walked around exhibits of space objects, such as meteorites, as well as displays of dinosaur skeletons and naturalistic scenes with animals. As Sagan later wrote, "I was transfixed by the dioramas—lifelike representations of animals and their habitats all over the world. Penguins on the dimly lit Antarctic ice [...] a family of gorillas, the male beating his chest [...] an American grizzly bear standing on his hind legs, ten or twelve feet tall, and staring me right in the eye." Sagan's parents nurtured his growing interest in science, buying him chemistry sets and reading matter. But his fascination with outer space emerged as his primary focus, especially after he had read science fiction by such writers as H. G. Wells and Edgar Rice Burroughs, stirring his curiosity about the possibility of life on Mars and other planets. According to biographer Ray Spangenburg, Sagan's efforts in his early years to understand the mysteries of the planets became a "driving force in his life, a continual spark to his intellect, and a quest that would never be forgotten." In 1947, Sagan discovered the magazine "Astounding Science Fiction", which introduced him to more hard science fiction speculations than those in the Burroughs novels. That same year, mass hysteria developed about the possibility that extraterrestrial visitors had arrived in flying saucers, and the young Sagan joined in the speculation that the flying "discs" people reported seeing in the sky might be alien spaceships. Education. Sagan attended David A. Boody Junior High School in his native Bensonhurst and had his bar mitzvah when he turned 13. In 1948, when he was 14, his father's work took the family to the older semi-industrial town of Rahway, New Jersey, where he attended Rahway High School. He was a straight-A student but was bored because his classes did not challenge him and his teachers did not inspire him. His teachers realized this and tried to convince his parents to send him to a private school, with an administrator telling them, "This kid ought to go to a school for gifted children, he has something really remarkable." However, his parents could not afford to do so. Sagan became president of the school's chemistry club, and set up his own laboratory at home. He taught himself about molecules by making cardboard cutouts to help him visualize how they were formed: "I found that about as interesting as doing [chemical] experiments." He was mostly interested in astronomy, learning about it in his spare time. 
In his junior year of high school, he discovered that professional astronomers were paid for doing something he always enjoyed, and decided on astronomy as a career goal: "That was a splendid day—when I began to suspect that if I tried hard I could do astronomy full-time, not just part-time." Sagan graduated from Rahway High School in 1951. Before the end of high school, Sagan entered an essay-writing contest in which he explored the idea that human contact with advanced life forms from another planet might be as disastrous for people on Earth as first contact with Europeans had been for Native Americans. The subject was considered controversial, but his rhetorical skill won over the judges and they awarded him first prize. When he was about to graduate from high school, his classmates voted him "most likely to succeed" and put him in line to be valedictorian. He attended the University of Chicago because, despite his excellent high school grades, it was one of the very few colleges he had applied to that would consider accepting a 16-year-old. Its chancellor, Robert Maynard Hutchins, had recently retooled the undergraduate College of the University of Chicago into an "ideal meritocracy" built on Great Books, Socratic dialogue, comprehensive examinations, and early entrance to college with no age requirement. As an honors-program undergraduate, Sagan worked in the laboratory of geneticist H. J. Muller and wrote a thesis on the origins of life with physical chemist Harold Urey. He also joined the Ryerson Astronomical Society. In 1954, he was awarded a Bachelor of Liberal Arts with general and special honors in what he quipped was "nothing." In 1955, he earned a Bachelor of Science in physics. He went on to do graduate work at the University of Chicago, earning a Master of Science in physics in 1956 and a Doctor of Philosophy in astronomy and astrophysics in 1960. His doctoral thesis, submitted to the Department of Astronomy and Astrophysics, was entitled "Physical Studies of the Planets". During his graduate studies, he used the summer months to work with planetary scientist Gerard Kuiper, who was his dissertation director, as well as physicist George Gamow and chemist Melvin Calvin. The title of Sagan's dissertation reflected interests he had in common with Kuiper, who had been president of the International Astronomical Union's commission on "Physical Studies of Planets and Satellites" throughout the 1950s. In 1958, Sagan and Kuiper worked on the classified military Project A119, a secret United States Air Force plan to detonate a nuclear warhead on the Moon and document its effects. Sagan held a Top Secret clearance with the Air Force and a Secret clearance with NASA. In 1999, an article published in the journal "Nature" revealed that Sagan had included the classified titles of two Project A119 papers in his 1959 application for a scholarship to the University of California, Berkeley. A follow-up letter to the journal by project leader Leonard Reiffel confirmed Sagan's security leak.
After the publication of Sagan's "Science" article, in 1961, Harvard University astronomers Fred Whipple and Donald Menzel offered Sagan the opportunity to give a colloquium at Harvard and subsequently offered him a lecturer position at the institution. Sagan instead asked to be made an assistant professor, and eventually Whipple and Menzel were able to convince Harvard to offer Sagan the assistant professor position he requested. Sagan lectured, performed research, and advised graduate students at the institution from 1963 until 1968, as well as working at the Smithsonian Astrophysical Observatory, also located in Cambridge, Massachusetts. In 1968, Sagan was denied academic tenure at Harvard. He later indicated that the decision was very unexpected. The denial has been blamed on several factors, including that he focused his interests too broadly across a number of areas (while the norm in academia is to become a renowned expert in a narrow specialty), and perhaps because of his well-publicized scientific advocacy, which some scientists perceived as borrowing the ideas of others for little more than self-promotion. An advisor from his years as an undergraduate student, Harold Urey, wrote a letter to the tenure committee recommending strongly against tenure for Sagan. Long before the ill-fated tenure process, Cornell University astronomer Thomas Gold had courted Sagan to move to Ithaca, New York, and join the recently hired astronomer Frank Drake among the faculty at Cornell. Following the denial of tenure from Harvard, Sagan accepted Gold's offer and remained a faculty member at Cornell for nearly 30 years until his death in 1996. Unlike Harvard, the smaller and more laid-back astronomy department at Cornell welcomed Sagan's growing celebrity status. Following two years as an associate professor, Sagan became a full professor at Cornell in 1970 and directed the Laboratory for Planetary Studies there. From 1972 to 1981, he was associate director of the Center for Radiophysics and Space Research (CRSR) at Cornell. In 1976, he became the David Duncan Professor of Astronomy and Space Sciences, a position he held for the remainder of his life. Sagan was associated with the U.S. space program from its inception. From the 1950s onward, he worked as an advisor to NASA, where one of his duties included briefing the Apollo astronauts before their flights to the Moon. Sagan contributed to many of the robotic spacecraft missions that explored the Solar System, arranging experiments on many of the expeditions. Sagan assembled the first physical message that was sent into space: a gold-plated plaque, attached to the space probe "Pioneer 10", launched in 1972. "Pioneer 11", also carrying another copy of the plaque, was launched the following year. He continued to refine his designs; the most elaborate message he helped to develop and assemble was the Voyager Golden Record, which was sent out with the Voyager space probes in 1977. Sagan often challenged the decisions to fund the Space Shuttle and the International Space Station at the expense of further robotic missions. Scientific achievements. Former student David Morrison described Sagan as "an 'idea person' and a master of intuitive physical arguments and 'back of the envelope' calculations", and Gerard Kuiper said that "Some persons work best in specializing on a major program in the laboratory; others are best in liaison between sciences. Dr. Sagan belongs in the latter group." 
Sagan's contributions were central to the discovery of the high surface temperatures of the planet Venus. In the early 1960s no one knew for certain the basic conditions of Venus' surface, and Sagan listed the possibilities in a report later depicted for popularization in a Time-Life book, "Planets". His own view was that Venus was dry and very hot, as opposed to the balmy paradise others had imagined. He had investigated radio waves from Venus and concluded that the planet had an extremely high surface temperature. As a visiting scientist to NASA's Jet Propulsion Laboratory, he contributed to the first Mariner missions to Venus, working on the design and management of the project. Mariner 2 confirmed his conclusions on the surface conditions of Venus in 1962. Sagan was among the first to hypothesize that Saturn's moon Titan might possess oceans of liquid compounds on its surface and that Jupiter's moon Europa might possess subsurface oceans of water. This would make Europa potentially habitable. Europa's subsurface ocean of water was later indirectly confirmed by the spacecraft "Galileo". The mystery of Titan's reddish haze was also solved with Sagan's help: the reddish haze was revealed to be due to complex organic molecules constantly raining down onto Titan's surface. Sagan further contributed insights regarding the atmospheres of Venus and Jupiter, as well as seasonal changes on Mars. He also perceived global warming as a growing, man-made danger and likened it to the natural development of Venus into a hot, life-hostile planet through a kind of runaway greenhouse effect. He testified to the US Congress in 1985 that the greenhouse effect would change the Earth's climate system. Sagan and his Cornell colleague Edwin Ernest Salpeter speculated about life in Jupiter's clouds, given the planet's dense atmospheric composition rich in organic molecules. He studied the observed color variations on Mars' surface and concluded that they were not seasonal or vegetational changes, as most believed, but shifts in surface dust caused by windstorms. Sagan is also known for his research on the possibilities of extraterrestrial life, including experimental demonstration of the production of amino acids from basic chemicals by radiation. He is also the 1994 recipient of the Public Welfare Medal, the highest award of the National Academy of Sciences, for "distinguished contributions in the application of science to the public welfare." He was denied membership in the academy, reportedly because his media activities made him unpopular with many other scientists. Sagan remains the most cited SETI scientist and one of the most cited planetary scientists. "Cosmos": popularizing science on TV. In 1980, Sagan co-wrote and narrated the award-winning 13-part PBS television series "Cosmos: A Personal Voyage", which became the most widely watched series in the history of American public television until 1990. The show has been seen by at least 500 million people across 60 countries. The book "Cosmos", written by Sagan, was published to accompany the series. Because of his earlier popularity as a science writer from his best-selling books, including "The Dragons of Eden", which won him a Pulitzer Prize in 1977, he was asked to write and narrate the show. It was targeted at a general audience of viewers, who Sagan felt had lost interest in science, partly due to a stifled educational system. Each of the 13 episodes was created to focus on a particular subject or person, thereby demonstrating the synergy of the universe.
They covered a wide range of scientific subjects, including the origin of life and a perspective on humans' place on Earth. The show won an Emmy, along with a Peabody Award, and transformed Sagan from an obscure astronomer into a pop-culture icon. "Time" magazine ran a cover story about Sagan soon after the show was broadcast, referring to him as "creator, chief writer and host-narrator of the show." In 2000, "Cosmos" was released on a remastered set of DVDs. "Billions and billions". After "Cosmos" aired, Sagan became associated with the catchphrase "billions and billions", although he never actually used the phrase in the "Cosmos" series; he rather used the phrase "billions upon billions." Richard Feynman, a precursor to Sagan, used the phrase "billions and billions" many times in his "red books." However, Sagan's frequent use of the word "billions" and distinctive delivery emphasizing the "b" (which he did intentionally, in place of more cumbersome alternatives such as "billions with a 'b'", in order to distinguish the word from "millions") made him a favorite target of comic performers, including Johnny Carson, Gary Kroeger, Mike Myers, Bronson Pinchot, Penn Jillette, Harry Shearer, and others. Frank Zappa satirized the line in the song "Be in My Video", noting as well "atomic light." Sagan took this all in good humor, and his final book was titled "Billions and Billions", which opened with a tongue-in-cheek discussion of this catchphrase, observing that Carson was an amateur astronomer and that Carson's comic caricature often included real science. As a humorous tribute to Sagan and his association with the catchphrase "billions and billions", a "sagan" has been defined as a unit of measurement equivalent to a very large number of anything. Sagan's number. Sagan's number is the number of stars in the observable universe. This number is reasonably well defined, because it is known what stars are and what the observable universe is, but its value is highly uncertain. Scientific and critical thinking advocacy. Sagan's ability to convey his ideas allowed many people to understand the cosmos better—simultaneously emphasizing the value and worthiness of the human race, and the relative insignificance of the Earth in comparison to the Universe. He delivered the 1977 series of Royal Institution Christmas Lectures in London. Sagan was a proponent of the search for extraterrestrial life. He urged the scientific community to listen with radio telescopes for signals from potential intelligent extraterrestrial life-forms. Sagan was so persuasive that by 1982 he was able to get a petition advocating SETI published in the journal "Science", signed by 70 scientists, including seven Nobel Prize winners. This signaled a tremendous increase in the respectability of a then-controversial field. Sagan also helped Frank Drake write the Arecibo message, a radio message beamed into space from the Arecibo radio telescope on November 16, 1974, aimed at informing potential extraterrestrials about Earth. Sagan was editor-in-chief of the professional planetary research journal "Icarus" for 12 years. He co-founded The Planetary Society and was a member of the SETI Institute Board of Trustees. Sagan served as Chairman of the Division for Planetary Science of the American Astronomical Society, as President of the Planetology Section of the American Geophysical Union, and as Chairman of the Astronomy Section of the American Association for the Advancement of Science (AAAS).
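As an illustration of why the value of Sagan's number is quoted so loosely, a back-of-the-envelope sketch (using round figures commonly cited in the astronomical literature, not values taken from this article) multiplies a rough count of galaxies in the observable universe by a rough average number of stars per galaxy:

\[ N_{\text{stars}} \;\approx\; N_{\text{galaxies}} \times \langle N_{\text{stars per galaxy}} \rangle \;\approx\; (10^{11}\text{–}10^{12}) \times 10^{11} \;\approx\; 10^{22}\text{–}10^{23}. \]

Both factors are themselves uncertain by an order of magnitude or more, which is why the total is normally stated only to its nearest power of ten.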
At the height of the Cold War, Sagan became involved in nuclear disarmament efforts by promoting hypotheses on the effects of nuclear war, when Paul Crutzen's "Twilight at Noon" concept suggested that a substantial nuclear exchange could trigger a nuclear twilight and upset the delicate balance of life on Earth by cooling the surface. In 1983, he was one of five authors—the "S"—in the follow-up "TTAPS" model (as the research article came to be known), which contained the first use of the term "nuclear winter", which his colleague Richard P. Turco had coined. In 1984, he co-authored the book "The Cold and the Dark: The World after Nuclear War" and in 1990, the book "A Path Where No Man Thought: Nuclear Winter and the End of the Arms Race", which explain the nuclear-winter hypothesis and advocate nuclear disarmament. Sagan received a great deal of skepticism and disdain for the use of media to disseminate a very uncertain hypothesis. A personal correspondence with nuclear physicist Edward Teller around 1983 began amicably, with Teller expressing support for continued research to ascertain the credibility of the winter hypothesis. However, Sagan and Teller's correspondence would ultimately result in Teller writing: "A propagandist is one who uses incomplete information to produce maximum persuasion. I can compliment you on being, indeed, an excellent propagandist, remembering that a propagandist is the better the less he appears to be one." Biographers of Sagan would also comment that from a scientific viewpoint, nuclear winter was a low point for Sagan, although, politically speaking, it popularized his image among the public. The adult Sagan remained a fan of science fiction, although he disliked stories that were not realistic (such as ignoring the inverse-square law) or, he said, did not include "thoughtful pursuit of alternative futures." He wrote books to popularize science, such as "Cosmos", which reflected and expanded upon some of the themes of "A Personal Voyage" and became the best-selling science book ever published in English; "The Dragons of Eden: Speculations on the Evolution of Human Intelligence", which won a Pulitzer Prize; and "Broca's Brain: Reflections on the Romance of Science". Sagan also wrote the best-selling science fiction novel "Contact" in 1985, based on a film treatment he wrote with his wife, Ann Druyan, in 1979, but he did not live to see the book's 1997 motion-picture adaptation, which starred Jodie Foster and won the 1998 Hugo Award for Best Dramatic Presentation. Sagan wrote a sequel to "Cosmos", "Pale Blue Dot: A Vision of the Human Future in Space", which was selected as a notable book of 1995 by "The New York Times". He appeared on PBS's "Charlie Rose" program in January 1995. Sagan also wrote the introduction for Stephen Hawking's bestseller "A Brief History of Time". Sagan was also known for his popularization of science, his efforts to increase scientific understanding among the general public, and his positions in favor of scientific skepticism and against pseudoscience, such as his debunking of the Betty and Barney Hill abduction. To mark the tenth anniversary of Sagan's death, David Morrison, a former student of Sagan, recalled "Sagan's immense contributions to planetary research, the public understanding of science, and the skeptical movement" in "Skeptical Inquirer".
Following Saddam Hussein's threats to light Kuwait's oil wells on fire in response to any physical challenge to Iraqi control of the oil assets, Sagan, together with his "TTAPS" colleagues and Paul Crutzen, warned in January 1991 in "The Baltimore Sun" and "Wilmington Morning Star" newspapers that if the fires were left to burn over a period of several months, enough smoke from the 600 or so 1991 Kuwaiti oil fires "might get so high as to disrupt agriculture in much of South Asia ..." and that this possibility should "affect the war plans"; these claims were also the subject of a televised debate between Sagan and physicist Fred Singer on January 22, aired on the ABC News program "Nightline". In the televised debate, Sagan argued that the effects of the smoke would be similar to the effects of a nuclear winter, with Singer arguing to the contrary. After the debate, the fires burnt for many months before extinguishing efforts were complete. The smoke did not produce continental-sized cooling. Sagan later conceded in "The Demon-Haunted World" that the prediction did not turn out to be correct: "it "was" pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared." In his later years, Sagan advocated the creation of an organized search for asteroids and other near-Earth objects (NEOs) that might impact the Earth, while arguing for forestalling or postponing the development of the technological methods that would be needed to defend against them. He argued that all of the numerous methods proposed to alter the orbit of an asteroid, including the employment of nuclear detonations, created a deflection dilemma: if the ability exists to deflect an asteroid away from the Earth, then one would also have the ability to divert a non-threatening object towards Earth, creating an immensely destructive weapon. In a 1994 paper he co-authored, he ridiculed a three-day-long "Near-Earth Object Interception Workshop" held by Los Alamos National Laboratory (LANL) in 1993 that did not, "even in passing", state that such interception and deflection technologies could have these "ancillary dangers." Sagan remained hopeful that the natural NEO impact threat and the intrinsically double-edged essence of the methods to prevent these threats would serve as a "new and potent motivation to maturing international relations." He later acknowledged that, with sufficient international oversight, a "work our way up" approach to implementing nuclear explosive deflection methods could be fielded in the future, and that once sufficient knowledge was gained, they could be used to aid in mining asteroids. His interest in the use of nuclear detonations in space grew out of his work in 1958 for the Armour Research Foundation's Project A119, concerning the possibility of detonating a nuclear device on the lunar surface. Sagan was a critic of Plato, having said of the ancient Greek philosopher: "Science and mathematics were to be removed from the hands of the merchants and the artisans. This tendency found its most effective advocate in a follower of Pythagoras named Plato." In 1995 (as part of his book "The Demon-Haunted World"), Sagan popularized a set of tools for skeptical thinking called the "baloney detection kit", a phrase first coined by Arthur Felberbaum, a friend of his wife Ann Druyan. Popularizing science.
Speaking about his activities in popularizing science, Sagan said that there were at least two reasons for scientists to share the purposes of science and its contemporary state. Simple self-interest was one: much of the funding for science came from the public, and the public therefore had the right to know how the money was being spent. If scientists increased public admiration for science, there was a good chance of having more public supporters. The other reason was the excitement of communicating one's own excitement about science to others. Following the success of "Cosmos", Sagan set up his own publishing firm, Cosmos Store, to publish science books for the general public. It was not successful. Criticisms. While Sagan was widely adored by the general public, his reputation in the scientific community was more polarized. Critics sometimes characterized his work as fanciful, non-rigorous, and self-aggrandizing, and others complained in his later years that he neglected his role as a faculty member to foster his celebrity status. One of Sagan's harshest critics, Harold Urey, felt that Sagan was getting too much publicity for a scientist and was treating some scientific theories too casually. Urey and Sagan were said to have different philosophies of science, according to Davidson. While Urey was an "old-time empiricist" who avoided theorizing about the unknown, Sagan was by contrast willing to speculate openly about such matters. Fred Whipple wanted Harvard to keep Sagan there, but learned that because Urey was a Nobel laureate, his opinion was an important factor in Harvard denying Sagan tenure. Sagan's Harvard friend Lester Grinspoon also stated: "I know Harvard well enough to know there are people there who certainly do not like people who are outspoken." Grinspoon added: Some, like Urey, later believed that Sagan's popular brand of scientific advocacy was beneficial to the science as a whole. Urey especially liked Sagan's 1977 book "The Dragons of Eden" and wrote Sagan with his opinion: "I like it very much and am amazed that someone like you has such an intimate knowledge of the various features of the problem... I congratulate you... You are a man of many talents." Sagan was accused of borrowing some ideas of others for his own benefit and countered these claims by explaining that the misappropriation was an unfortunate side effect of his role as a science communicator and explainer, and that he attempted to give proper credit whenever possible. Social concerns. Sagan believed that the Drake equation, on substitution of reasonable estimates, suggested that a large number of extraterrestrial civilizations would form, but that the lack of evidence of such civilizations highlighted by the Fermi paradox suggests technological civilizations tend to self-destruct. This stimulated his interest in identifying and publicizing ways that humanity could destroy itself, with the hope of avoiding such a cataclysm and eventually becoming a spacefaring species. Sagan's deep concern regarding the potential destruction of human civilization in a nuclear holocaust was conveyed in a memorable cinematic sequence in the final episode of "Cosmos", called "Who Speaks for Earth?" Sagan had already resigned from the Air Force Scientific Advisory Board's UFO-investigating Condon Committee and voluntarily surrendered his top-secret clearance in protest over the Vietnam War. 
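The Drake equation that Sagan invoked above is conventionally written as

N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R_* is the rate of star formation in the galaxy, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per planetary system, f_l, f_i and f_c the fractions of those worlds on which life, intelligence and detectable technology respectively arise, and L the length of time over which a civilization releases detectable signals. The "reasonable estimates" Sagan substituted are not reproduced here, so any specific values plugged into the formula should be read as illustrative assumptions rather than figures he endorsed.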
Following his marriage to his third wife (novelist Ann Druyan) in June 1981, Sagan became more politically active—particularly in opposing escalation of the nuclear arms race under President Ronald Reagan. In March 1983, Reagan announced the Strategic Defense Initiative—a multibillion-dollar project to develop a comprehensive defense against attack by nuclear missiles, which was quickly dubbed the "Star Wars" program. Sagan, along with other scientists, spoke out against the project, arguing that it was technically impossible to develop a system with the level of perfection required, and far more expensive to build such a system than it would be for an enemy to defeat it through decoys and other means—and that its construction would seriously destabilize the "nuclear balance" between the United States and the Soviet Union, making further progress toward nuclear disarmament impossible. When Soviet leader Mikhail Gorbachev declared a unilateral moratorium on the testing of nuclear weapons, which would begin on August 6, 1985—the 40th anniversary of the atomic bombing of Hiroshima—the Reagan administration dismissed the dramatic move as nothing more than propaganda and refused to follow suit. In response, US anti-nuclear and peace activists staged a series of protest actions at the Nevada Test Site, beginning on Easter Sunday in 1986 and continuing through 1987. Hundreds of people in the "Nevada Desert Experience" group were arrested, including Sagan, who was arrested on two separate occasions as he climbed over a chain-link fence at the test site during the United States' underground Operation Charioteer and Musketeer nuclear test series of detonations. Sagan was also a vocal advocate of the controversial notion of testosterone poisoning, arguing in 1992 that human males could become gripped by an "unusually severe [case of] testosterone poisoning" and this could compel them to become genocidal. In his review of Moondance magazine writer Daniela Gioseffi's 1990 book "Women on War", he argues that females are the only half of humanity "untainted by testosterone poisoning." One chapter of his 1993 book "Shadows of Forgotten Ancestors" is dedicated to testosterone and its alleged poisonous effects. In 1989, Carl Sagan was asked by Ted Turner in an interview whether he believed in socialism and responded: "I'm not sure what a socialist is. But I believe the government has a responsibility to care for the people... I'm talking about making the people self-reliant." Personal life and beliefs. Sagan was married three times. In 1957, he married biologist Lynn Margulis. The couple had two children, Jeremy and Dorion Sagan. According to Margulis, Sagan could be physically abusive and insisted she do the majority of the domestic duties. Their marriage ended in 1964. Sagan married artist Linda Salzman in 1968 and they had a child together, Nick Sagan, and divorced in 1981. During these marriages, Carl Sagan focused heavily on his career, a factor which may have contributed to Sagan's first divorce. In 1981, Sagan married author Ann Druyan and they later had two children, Alexandra (known as Sasha) and Samuel Sagan. Carl Sagan and Druyan remained married until his death in 1996. While teaching at Cornell, he lived in an Egyptian revival house in Ithaca, perched on the edge of a cliff, that had formerly been the headquarters of a Cornell University secret society. While there he drove a red Porsche 911 Targa and an orange 1970 Porsche 914 with the license plate PHOBOS. 
In 1994, engineers at Apple Computer code-named the Power Macintosh 7100 "Carl Sagan" in the hope that Apple would make "billions and billions" with the sale of the PowerMac 7100. The name was only used internally, but Sagan was concerned that it would become a product endorsement and sent Apple a cease-and-desist letter. Apple complied, but engineers retaliated by changing the internal codename to "BHA" for "Butt-Head Astronomer." In November 1995, after a further legal battle, an out-of-court settlement was reached and Apple's office of trademarks and patents released a conciliatory statement that "Apple has always had great respect for Dr. Sagan. It was never Apple's intention to cause Dr. Sagan or his family any embarrassment or concern." In 2019, Carl Sagan's daughter Sasha Sagan released "For Small Creatures Such as We: Rituals for Finding Meaning in our Unlikely World", which depicts life with her parents and her father's death when she was fourteen. Building on a theme in her father's work, Sasha Sagan argues in "For Small Creatures Such as We" that skepticism does not imply pessimism. Sagan was acquainted with science fiction fandom through his friendship with Isaac Asimov, and he spoke at the Nebula Awards ceremony in 1969. Asimov described Sagan as one of only two people he ever met whose intellect surpassed his own, the other being computer scientist and artificial intelligence expert Marvin Minsky. Naturalism. Sagan wrote frequently about religion and the relationship between religion and science, expressing his skepticism about the conventional conceptualization of God as a sapient being. For example: In another description of his view on the concept of God, Sagan wrote: On atheism, Sagan said in 1981: Sagan also commented on Christianity and the Jefferson Bible, stating "My long-time view about Christianity is that it represents an amalgam of two seemingly immiscible parts, the religion of Jesus and the religion of Paul. Thomas Jefferson attempted to excise the Pauline parts of the New Testament. There wasn't much left when he was done, but it was an inspiring document." Sagan thought that spirituality should be scientifically informed and that traditional religions should be abandoned and replaced with belief systems that revolve around the scientific method but also embrace the mystery and incompleteness of scientific fields. Regarding spirituality and its relationship with science, Sagan stated: An environmental appeal, "Preserving and Cherishing the Earth", primarily written by Sagan and signed by him and other noted scientists as well as religious leaders, and published in January 1990, stated that "The historical record makes clear that religious teaching, example, and leadership are powerfully able to influence personal conduct and commitment... Thus, there is a vital role for religion and science." In reply to a question in 1996 about his religious beliefs, Sagan said he was agnostic. Sagan maintained that the idea of a creator God of the Universe was difficult to prove or disprove and that the only conceivable scientific discovery that could challenge it would be an infinitely old universe. His son, Dorion Sagan, said, "My father believed in the God of Spinoza and Einstein, God not behind nature but as nature, equivalent to it." His last wife, Ann Druyan, said: In 2006, Druyan edited Sagan's 1985 Glasgow "Gifford Lectures in Natural Theology" into a book, "The Varieties of Scientific Experience: A Personal View of the Search for God", in which he elaborates on his views of divinity in the natural world. 
Sagan is also widely regarded as a freethinker or skeptic. One of his most famous quotations, "extraordinary claims require extraordinary evidence", from "Cosmos", is called the "Sagan standard" by some. It was based on a nearly identical statement by fellow founder of the Committee for the Scientific Investigation of Claims of the Paranormal, Marcello Truzzi, "An extraordinary claim requires extraordinary proof." This idea had earlier been aphorized in Théodore Flournoy's work "From India to the Planet Mars" (1899) from a longer quote by Pierre-Simon Laplace (1749–1827), a French mathematician and astronomer, as the Principle of Laplace: "The weight of the evidence should be proportioned to the strangeness of the facts." Late in his life, Sagan's books elaborated on his naturalistic view of the world. In "The Demon-Haunted World", he presented tools for testing arguments and detecting fallacious or fraudulent ones, essentially advocating the wide use of critical thinking and of the scientific method. The compilation "Billions and Billions: Thoughts on Life and Death at the Brink of the Millennium", published in 1997 after Sagan's death, contains essays written by him, on topics such as his views on abortion, and also an essay by his widow, Ann Druyan, about the relationship between his agnostic and freethinking beliefs and his death. Sagan warned against humans' tendency towards anthropocentrism. He was the faculty adviser for the Cornell Students for the Ethical Treatment of Animals. In the "Cosmos" chapter "Blues For a Red Planet", Sagan wrote, "If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes." Marijuana advocacy. Sagan was a user and advocate of marijuana. Under the pseudonym "Mr. X", he contributed an essay about smoking cannabis to the 1971 book "Marihuana Reconsidered". The essay explained that marijuana use had helped to inspire some of Sagan's works and enhance sensual and intellectual experiences. After Sagan's death, his friend Lester Grinspoon disclosed this information to Sagan's biographer, Keay Davidson. The publication of the biography "Carl Sagan: A Life" in 1999 brought media attention to this aspect of Sagan's life. Not long after his death, his widow Ann Druyan went on to preside over the board of directors of the National Organization for the Reform of Marijuana Laws (NORML), a non-profit organization dedicated to reforming cannabis laws. UFOs. In 1947, the year that inaugurated the "flying saucer" craze, the young Sagan suspected the "discs" might be alien spaceships. Sagan's interest in UFO reports prompted him on August 3, 1952, to write a letter to U.S. Secretary of State Dean Acheson to ask how the United States would respond if flying saucers turned out to be extraterrestrial. He later had several conversations on the subject in 1964 with Jacques Vallée. Though quite skeptical of any extraordinary answer to the UFO question, Sagan thought scientists should study the phenomenon, at least because there was widespread public interest in UFO reports. Stuart Appelle notes that Sagan "wrote frequently on what he perceived as the logical and empirical fallacies regarding UFOs and the abduction experience. Sagan rejected an extraterrestrial explanation for the phenomenon but felt there were both empirical and pedagogical benefits for examining UFO reports and that the subject was, therefore, a legitimate topic of study." 
In 1966, Sagan was a member of the Ad Hoc Committee to Review Project Blue Book, the U.S. Air Force's UFO investigation project. The committee concluded Blue Book had been lacking as a scientific study, and recommended a university-based project to give the UFO phenomenon closer scientific scrutiny. The result was the Condon Committee (1966–68), led by physicist Edward Condon, and in their final report they formally concluded that UFOs, regardless of what any of them actually were, did not behave in a manner consistent with a threat to national security. Sociologist Ron Westrum writes that "The high point of Sagan's treatment of the UFO question was the AAAS' symposium in 1969. A wide range of educated opinions on the subject were offered by participants, including not only proponents such as James McDonald and J. Allen Hynek but also skeptics like astronomers William Hartmann and Donald Menzel. The roster of speakers was balanced, and it is to Sagan's credit that this event was presented in spite of pressure from Edward Condon." With physicist Thornton Page, Sagan edited the lectures and discussions given at the symposium; these were published in 1972 as "UFO's: A Scientific Debate". Some of Sagan's many books examine UFOs (as did one episode of "Cosmos") and he claimed that there was a religious undercurrent to the phenomenon. Sagan again revealed his views on interstellar travel in his 1980 "Cosmos" series. In one of his last written works, Sagan argued that the chances of extraterrestrial spacecraft visiting Earth are vanishingly small. However, Sagan did think it plausible that Cold War concerns contributed to governments misleading their citizens about UFOs, and wrote that "some UFO reports and analyses, and perhaps voluminous files, have been made inaccessible to the public which pays the bills ... It's time for the files to be declassified and made generally available." He cautioned against jumping to conclusions about suppressed UFO data and stressed that there was no strong evidence that aliens were visiting the Earth either in the past or present. Sagan briefly served as an adviser on Stanley Kubrick's film "2001: A Space Odyssey". Sagan proposed that the film suggest, rather than depict, extraterrestrial superintelligence. Death. After suffering from myelodysplasia for two years and receiving three bone marrow transplants from his sister, Sagan died from pneumonia at the age of 62 at the Fred Hutchinson Cancer Research Center in Seattle on December 20, 1996. He was buried at Lake View Cemetery in Ithaca, New York. Awards and honors. Posthumous recognition. Sites named after him. In 1997, the Sagan Planet Walk was opened in Ithaca, New York. It is a walking-scale model of the Solar System, extending 1.2 km from the center of The Commons in downtown Ithaca to the Sciencenter, a hands-on museum. The exhibition was created in memory of Carl Sagan, who was an Ithaca resident and Cornell Professor. Professor Sagan had been a founding member of the museum's advisory board. The landing site of the uncrewed "Mars Pathfinder" spacecraft was renamed the Carl Sagan Memorial Station on July 5, 1997. Asteroid 2709 Sagan is named in his honor, as is the Carl Sagan Institute for the search for habitable planets. On November 9, 2001, on what would have been Sagan's 67th birthday, the Ames Research Center dedicated the site for the Carl Sagan Center for the Study of Life in the Cosmos. 
"Carl was an incredible visionary, and now his legacy can be preserved and advanced by a 21st century research and education laboratory committed to enhancing our understanding of life in the universe and furthering the cause of space exploration for all time", said NASA Administrator Daniel Goldin. Ann Druyan was at the center as it opened its doors on October 22, 2006. On October 21, 2019, the Carl Sagan and Ann Druyan Theater was opened at the Center for Inquiry West in Los Angeles. Awards named after him. Sagan has at least three awards named in his honor: Awards given him. August 2007 the Independent Investigations Group (IIG) awarded Sagan posthumously a Lifetime Achievement Award. This honor has also been awarded to Harry Houdini and James Randi. In 2022, Sagan was posthumously awarded the Future of Life Award "for reducing the risk of nuclear war by developing and popularizing the science of nuclear winter." The honor, shared by seven other recipients involved in nuclear winter research, was accepted by his widow, Ann Druyan. In popular culture. The 1997 film "Contact" was based on the only novel Sagan wrote and finished after his death. It ends with the dedication "For Carl." His photo can also be seen in the film. Sagan's son, Nick Sagan, wrote several episodes in the "Star Trek" franchise. In an episode of "" entitled "Terra Prime", a quick shot is shown of the relic rover "Sojourner", part of the "Mars Pathfinder" mission, placed by a historical marker at Carl Sagan Memorial Station on the Martian surface. The marker displays a quote from Sagan: "Whatever the reason you're on Mars, I'm glad you're there, and I wish I was with you." Sagan's student Steve Squyres led the team that landed the rovers "Spirit" and "Opportunity" successfully on Mars in 2004. In September 2008, a musical compositor Benn Jordan released his album "Pale Blue Dot" as a tribute to Carl Sagan's life. Beginning in 2009, a musical project known as Symphony of Science sampled several excerpts of Sagan from his series "Cosmos" and remixed them to electronic music. To date, the videos have received over 21 million views worldwide on YouTube. The 2014 Swedish science fiction short film "Wanderers" uses excerpts of Sagan's narration in 1994 of his book "Pale Blue Dot", played over digitally-created visuals of humanity's possible future expansion into outer space. In February 2015, the Finnish-based symphonic metal band Nightwish released the song "Sagan" as a non-album bonus track for their single "Élan." The song, written by the band's songwriter/composer/keyboardist Tuomas Holopainen, is an homage to the life and work of the late Carl Sagan. In August 2015, it was announced that a biopic of Sagan's life was being planned by Warner Bros. In 2022, the audiobook recording of Sagan's 1994 book "Pale Blue Dot" was selected by the U.S. Library of Congress for inclusion in the National Recording Registry for being "culturally, historically, or aesthetically significant." In 2023, a movie "Voyagers" by Sebastián Lelio was announced with Sagan played by Andrew Garfield and with Daisy Edgar-Jones playing Sagan's third wife, Ann Druyan. Recordings and archival video of Sagan were used extensively in two 2025 films, "Elio" and "The Life of Chuck".
6827
202394
https://en.wikipedia.org/wiki?curid=6827
Cuban Missile Crisis
The Cuban Missile Crisis, also known as the October Crisis in Cuba or the Caribbean Crisis, was a 13-day confrontation between the governments of the United States and the Soviet Union, when American deployments of nuclear missiles in Italy and Turkey were matched by Soviet deployments of nuclear missiles in Cuba. The crisis lasted from 16 to 28 October 1962. The confrontation is widely considered the closest the Cold War came to escalating into full-scale nuclear war. In 1961, the US government put Jupiter nuclear missiles in Italy and Turkey. It had trained a paramilitary force of expatriate Cubans, which the CIA led in an attempt to invade Cuba and overthrow its government. Starting in November of that year, the US government engaged in a violent campaign of terrorism and sabotage in Cuba, referred to as the Cuban Project, which continued throughout the first half of the 1960s. The Soviet administration was concerned about a Cuban drift towards China, with which the Soviets had an increasingly fractious relationship. In response to these factors, the Soviet and Cuban governments agreed, at a meeting between leaders Nikita Khrushchev and Fidel Castro in July 1962, to place nuclear missiles on Cuba to deter a future US invasion. Construction of launch facilities started shortly thereafter. A U-2 spy plane captured photographic evidence of medium- and intermediate-range launch facilities in October. US president John F. Kennedy convened a meeting of the National Security Council and other key advisers, forming the Executive Committee of the National Security Council (EXCOMM). Kennedy was advised to carry out an air strike on Cuban soil to destroy the Soviet missile sites, followed by an invasion of the island. He chose a less aggressive course in order to avoid a declaration of war. On 22 October, Kennedy ordered a naval blockade to prevent further missiles from reaching Cuba. He referred to it as a "quarantine" rather than a blockade so that the US could avoid the formal implications of a state of war. An agreement was eventually reached between Kennedy and Khrushchev. The Soviets would dismantle their offensive weapons in Cuba, subject to United Nations verification, in exchange for a US public declaration and agreement not to invade Cuba again. The United States secretly agreed to dismantle all of the offensive weapons it had deployed to Turkey. There has been debate on whether Italy was also included in the agreement. While the Soviets dismantled their missiles, some Soviet bombers remained in Cuba, and the United States kept the naval quarantine in place until 20 November 1962, when it was formally ended after all offensive missiles and bombers had been withdrawn from Cuba. The evident necessity of a quick and direct communication line between the two powers resulted in the Moscow–Washington hotline. A series of agreements later reduced US–Soviet tensions for several years. The compromise embarrassed Khrushchev and the Soviet Union because the withdrawal of US missiles from Italy and Turkey was a secret deal between Kennedy and Khrushchev, and the Soviets were seen as retreating from a situation that they had started. Khrushchev's fall from power two years later was in part because of the Soviet Politburo's embarrassment at both Khrushchev's eventual concessions to the US and his ineptitude in precipitating the crisis. 
According to the Soviet ambassador to the United States, Anatoly Dobrynin, the top Soviet leadership took the Cuban outcome as "a blow to its prestige bordering on humiliation". Background. Cuba–Soviet relations. In late 1961, Fidel Castro asked for more SA-2 anti-aircraft missiles from the Soviet Union. The request was not acted upon by the Soviet leadership. In the interval, Castro began criticizing the Soviets for lack of "revolutionary boldness", and began talking to China about agreements for economic assistance. In March 1962, Castro ordered the ousting of Anibal Escalante and his pro-Moscow comrades from Cuba's Integrated Revolutionary Organizations. This affair alarmed the Soviet leadership and raised fears of a possible US invasion. As a result, the Soviet Union sent more SA-2 anti-aircraft missiles in April, as well as a regiment of regular Soviet troops. Historian Timothy Naftali writes that Escalante's dismissal was a motivating factor behind the Soviet decision to place nuclear missiles in Cuba in 1962. According to Naftali, Soviet foreign policy planners were concerned that Castro's break with Escalante foreshadowed a Cuban drift toward China, and they sought to solidify the Soviet-Cuban relationship through the missile basing program. Cuba–US relations. The Cuban government regarded US imperialism as the primary explanation for the island's structural weaknesses. The US government had provided weapons, money, and its authority to the military dictatorship of Fulgencio Batista that ruled Cuba until 1958. The majority of the Cuban population had tired of the severe socioeconomic problems associated with the US domination of the country. The Cuban government was thus aware of the necessity of ending the turmoil and incongruities of US-dominated prerevolution Cuban society. It determined that the US government's demands, part of their hostile reaction to Cuban government policy, were unacceptable. With the end of World War II and the start of the Cold War, the US government sought to promote private enterprise as an instrument for advancing US strategic interests in the developing world. It had grown concerned about the expansion of communism. In December 1959, under the Eisenhower administration and less than twelve months after the Cuban Revolution, the Central Intelligence Agency (CIA) developed a plan for paramilitary action against Cuba. The CIA recruited operatives on the island to carry out terrorism and sabotage, kill civilians, and cause economic damage. At the initiative of the CIA Deputy Director for Plans, Richard Bissell, and approved by the new President John F. Kennedy, the US launched the attempted Bay of Pigs Invasion in April 1961 using CIA-trained forces of Cuban expatriates. The complete failure of the invasion, and the exposure of the US government's role before the operation began, was a source of diplomatic embarrassment for the Kennedy administration. Former President Eisenhower told Kennedy that "the failure of the Bay of Pigs will embolden the Soviets to do something that they would otherwise not do." Following the failed invasion, the US massively escalated its sponsorship of terrorism against Cuba. Starting in late 1961, using the military and the CIA, the US government engaged in an extensive campaign of state-sponsored terrorism against civilian and military targets on the island. The terrorist attacks killed significant numbers of civilians. The US armed, trained, funded and directed the terrorists, most of whom were Cuban expatriates. 
Terrorist attacks were planned at the direction, and with the participation, of US government employees and launched from US territory. In January 1962, US Air Force General Edward Lansdale described the plans to overthrow the Cuban government in a top-secret report, addressed to Kennedy and officials involved with Operation Mongoose. CIA agents or "pathfinders" from the Special Activities Division were to be infiltrated into Cuba to carry out sabotage and organizational work, including radio broadcasts. In February 1962, the US launched an embargo against Cuba, and Lansdale presented a 26-page, top-secret timetable for implementation of the overthrow of the Cuban government, mandating guerrilla operations to begin in August and September. The planners hoped that "open revolt and overthrow of the Communist regime" would occur in the first two weeks of October. The terrorism campaign and the threat of invasion were crucial factors in the Soviet decision to place nuclear missiles on Cuba, and in the Cuban government's decision to accept them. The US government was aware at the time, as reported to the president in a National Intelligence Estimate, that the invasion threat was a key reason for the increased Soviet military presence. US–Soviet relations. When Kennedy ran for president in 1960, one of his key election issues was an alleged "missile gap" with the Soviets. In fact, the US at that time was ahead of the Soviets, and by an increasingly wide margin. In 1961 the Soviets had four R-7 Semyorka intercontinental ballistic missiles (ICBMs); by October 1962, some intelligence estimates indicated a figure of 75. The US had 170 ICBMs and was quickly building more. It also had eight ballistic missile submarines, each with the capability to launch 16 Polaris missiles. The Soviet First Secretary, Nikita Khrushchev, increased the perception of a 'missile gap' when he boasted to the world that the Soviets were building missiles "like sausages", but Soviet missile numbers and capabilities were nowhere close to his assertions. The Soviet Union had medium-range ballistic missiles in quantity, about 700, but they were unreliable and inaccurate. The US had a considerable advantage in total number of nuclear warheads (27,000 against 3,600) and in the technology required for accurate delivery. The US also led in missile defensive capabilities and in naval and air power. The Soviets had a two-to-one advantage in conventional ground forces, particularly in field guns and tanks in the European theatre. Khrushchev also thought Kennedy was weak. This impression was confirmed by the President's response during the Berlin Crisis of 1961, particularly to the building of the Berlin Wall by East Germany to prevent its citizens from emigrating to the West. The half-hearted nature of the Bay of Pigs invasion reinforced his impression that Kennedy was indecisive and, as one Soviet aide wrote, "too young, intellectual, not prepared well for decision making in crisis situations... too intelligent and too weak". Speaking to Soviet officials in the aftermath of the crisis, Khrushchev said, "I know for certain that Kennedy doesn't have a strong background, nor, generally speaking, does he have the courage to stand up to a serious challenge." He told his son Sergei that on Cuba, Kennedy "would make a fuss, make more of a fuss, and then agree". Prelude. Conception. 
In May 1962, Soviet First Secretary Nikita Khrushchev decided to counter the growing lead of the US in developing and deploying strategic missiles by placing Soviet intermediate-range nuclear missiles in Cuba, despite the misgivings of the Soviet Ambassador in Havana, Alexandr Ivanovich Alexeyev, who argued that Castro would not accept them. Khrushchev faced a strategic situation in which the US was perceived to have a "splendid first strike" capability that put the Soviet Union at a disadvantage. In 1962, the Soviets had only 20 ICBMs capable of delivering nuclear warheads to the US from inside the Soviet Union. Their poor accuracy and reliability raised serious doubts about their effectiveness. A newer, more reliable generation of Soviet ICBMs only became operational after 1965. Soviet nuclear capability in 1962 placed less emphasis on ICBMs than on medium and intermediate-range ballistic missiles (MRBMs and IRBMs) which could strike American allies and most of Alaska from Soviet territory, but not the contiguous United States. As Graham Allison, the director of Harvard University's Belfer Center for Science and International Affairs, pointed out, "The Soviet Union could not right the nuclear imbalance by deploying new ICBMs on its own soil. In order to meet the threat it faced in 1962, 1963, and 1964, it had very few options. Moving existing nuclear weapons to locations from which they could reach American targets was one." A second reason that Soviet missiles were deployed to Cuba was that Khrushchev wanted to bring West Berlin, which lay within Communist East Germany and was controlled by the Americans, British and French, into the Soviet orbit. The East Germans and Soviets considered western control over a portion of Berlin to be a threat to East Germany. Khrushchev made West Berlin the central battlefield of the Cold War. He believed that if the US did nothing over the deployments of missiles in Cuba, he could force the West out of Berlin by using the missiles as a deterrent to western countermeasures in Berlin. If the US tried to bargain with the Soviets after it became aware of the missiles, Khrushchev could demand a trade of the missiles for West Berlin. Since Berlin was strategically more important than Cuba, the trade would be a win for Khrushchev, as Kennedy recognized: "The advantage is, from Khrushchev's point of view, he takes a great chance but there are quite some rewards to it." Thirdly, it seemed from the perspective both of the Soviet Union and of Cuba that the United States wanted to invade Cuba or increase its presence there. In view of actions which included an attempt to expel Cuba from the Organization of American States, a campaign of violent terrorist attacks on civilians which the US was carrying out in Cuba, economic sanctions against the country and an earlier attempt to invade the island, Cuban officials understood that America was trying to overrun their country. The USSR would respond by placing missiles on Cuba, which would secure the country against attack and keep it in the Socialist Bloc. American missiles could have been launched from Turkey to attack the USSR before the Soviets had a chance to react. Placing nuclear missiles on Cuba would have created a balance of mutual assured destruction. If the United States launched a nuclear strike against the Soviet Union, the Soviets would have been able to react by launching a retaliatory nuclear strike against the US. 
Placing nuclear missiles on Cuba was also a way for the USSR to show support for Cuba and the Cuban people who viewed the United States as a threat. The USSR had become Cuba's ally after the Cuban Revolution of 1959. According to Khrushchev, the Soviet Union's motives were "aimed at allowing Cuba to live peacefully and develop as its people desire". Arthur M. Schlesinger Jr., a historian and adviser to Kennedy, told National Public Radio in an interview on 16 October 2002 that Castro did not want the missiles, but Khrushchev pressured him to accept them. Castro was not completely happy with the idea, but the Cuban National Directorate of the Revolution accepted them, both to protect Cuba against US attack and to aid the Soviet Union. Soviet military deployments. In early 1962, a group of Soviet military and missile construction specialists accompanied an agricultural delegation to Havana and met Cuban prime minister Fidel Castro. According to one report, the Cuban leadership expected that the US would invade Cuba again and enthusiastically approved the idea of installing nuclear missiles on Cuba. According to another source, Castro objected to being made to look like a Soviet puppet, but was persuaded that missiles in Cuba would be an irritant to the US and would help the interests of the entire socialist camp. The deployment would include short-range tactical weapons with a range of 40 km, usable only against naval vessels, that would provide a "nuclear umbrella" against attacks upon the island. By May, Khrushchev and Castro agreed to place strategic nuclear missiles secretly in Cuba. Like Castro, Khrushchev felt that a US invasion of Cuba was imminent and that to lose Cuba would do great harm to the communists, especially in Latin America. He said he wanted to confront the Americans "with more than words... the logical answer was missiles". The Soviets maintained tight secrecy, writing their plans in longhand; the plans were approved by Marshal of the Soviet Union Rodion Malinovsky on 4 July and by Khrushchev on 7 July. The Soviets' operation entailed elaborate denial and deception, known as "maskirovka". All the planning and preparation for transporting and deploying the missiles was carried out in the utmost secrecy, with only a very few knowing the exact nature of the mission. Even the troops detailed for the mission were given misdirection by being told that they were headed for a cold region and were outfitted with ski boots, fleece-lined parkas, and other winter equipment. The Soviet code-name was Operation Anadyr. The Anadyr River flows into the Bering Sea, and Anadyr is also the capital of Chukotsky District and a bomber base in the far eastern region. All these measures were intended to conceal the program. Specialists in missile construction, under the guise of machine operators and agricultural specialists, arrived in July. A total of 43,000 foreign troops would ultimately be brought in. Chief Marshal of Artillery Sergey Biryuzov, Head of the Soviet Rocket Forces, led a survey team that visited Cuba. He told Khrushchev that the missiles would be concealed and camouflaged by palm trees. The Soviet troops would arrive in Cuba heavily underprepared. They did not know that the tropical climate would render ineffective many of their weapons and much of their equipment. In the first few days of setting up the missiles, troops complained of fuse failures, excessive corrosion, overconsumption of oil, and generator blackouts. 
As early as August 1962, the US suspected that the Soviets were building missile facilities in Cuba. During that month, its intelligence services gathered reports from ground observers of sightings of Soviet-built MiG-21 fighters and Il-28 light bombers. U-2 spy planes found S-75 Dvina (NATO designation "SA-2") surface-to-air missile sites at eight different locations. CIA director John A. McCone was suspicious. Sending antiaircraft missiles into Cuba, he reasoned, "made sense only if Moscow intended to use them to shield a base for ballistic missiles aimed at the United States". On 10 August, he wrote a memo to Kennedy in which he guessed that the Soviets were preparing to introduce ballistic missiles into Cuba. Che Guevara himself traveled to the Soviet Union on 30 August 1962 to sign the final agreement regarding the deployment of missiles in Cuba. The visit was heavily monitored by the CIA, which was watching Guevara closely. While in the Soviet Union, Guevara argued to Khrushchev that the missile deal should be made public, but Khrushchev insisted on total secrecy and promised the Soviet Union's support if the Americans discovered the missiles. By the time Guevara returned to Cuba, U-2 spy planes had already discovered the Soviet troops on the island. With important Congressional elections scheduled for November, the crisis became enmeshed in American politics. On 31 August, Senator Kenneth Keating (R-New York) warned on the Senate floor that the Soviet Union was "in all probability" constructing a missile base in Cuba. He charged the Kennedy administration with covering up a major threat to the US, thereby starting the crisis. He may have received this initial "remarkably accurate" information from his friend, former congresswoman and ambassador Clare Boothe Luce, who in turn received it from Cuban exiles. A later confirming source for Keating's information may have been the West German ambassador to Cuba, who had received information from dissidents inside Cuba that Soviet troops had arrived in Cuba in early August and were seen working "in all probability on or near a missile base". The ambassador passed this information to Keating on a trip to Washington in early October. Air Force General Curtis LeMay presented a pre-invasion bombing plan to Kennedy in September, and spy flights and minor military harassment from US forces at Guantanamo Bay Naval Base were the subject of continual Cuban diplomatic complaints to the US government. The first consignment of Soviet R-12 missiles arrived on the night of 8 September, followed by a second on 16 September. The R-12 was a medium-range ballistic missile capable of carrying a thermonuclear warhead. It was a single-stage, road-transportable, surface-launched, storable liquid propellant-fuelled missile that could deliver a megaton-class nuclear weapon. The Soviets were building nine sites, six for R-12 medium-range missiles (NATO designation "SS-4 Sandal") and three for R-14 intermediate-range ballistic missiles (NATO designation "SS-5 Skean"). On 7 October, Cuban President Osvaldo Dorticós Torrado spoke at the UN General Assembly: "If... we are attacked, we will defend ourselves. I repeat, we have sufficient means with which to defend ourselves; we have indeed our inevitable weapons, the weapons, which we would have preferred not to acquire, and which we do not wish to employ." 
On 11 October, in another Senate speech, Senator Keating reaffirmed his earlier warning of 31 August and stated that "Construction has begun on at least a half dozen launching sites for intermediate range tactical missiles." The Cuban leadership was further upset when, on 20 September, the US Senate approved Joint Resolution 230, which stated that the US was determined "to prevent in Cuba the creation or use of an externally-supported military capability endangering the security of the United States". On the same day, the US announced a major military exercise in the Caribbean, PHIBRIGLEX-62, which Cuba denounced as a deliberate provocation and proof that the US planned to invade Cuba. The Soviet leadership believed, based on its perception of Kennedy's lack of confidence during the Bay of Pigs Invasion, that he would avoid confrontation and would accept the missiles as a fait accompli. On 11 September, the Soviet Union publicly warned that a US attack on Cuba or on Soviet ships that were carrying supplies to the island would mean war. The Soviets continued the "Maskirovka" program to conceal their actions in Cuba. They repeatedly denied that the weapons being brought into Cuba were offensive in nature. On 7 September, Soviet Ambassador to the United States Anatoly Dobrynin assured United States Ambassador to the United Nations Adlai Stevenson that the Soviet Union was supplying only defensive weapons to Cuba. On 11 September, the Telegraph Agency of the Soviet Union (TASS: "Telegrafnoe Agentstvo Sovetskogo Soyuza") announced that the Soviet Union had no need or intention to introduce offensive nuclear missiles into Cuba. On 13 October, Dobrynin was questioned by former Undersecretary of State Chester Bowles about whether the Soviets planned to put offensive weapons in Cuba. He denied any such plans. On 17 October, Soviet embassy official Georgi Bolshakov brought President Kennedy a personal message from Khrushchev reassuring him that "under no circumstances would surface-to-surface missiles be sent to Cuba." Missiles reported. Missiles placed in Cuba would enable the Soviets to target most of the continental US. The planned arsenal consisted of forty launchers. The Cuban populace observed the arrival and deployment of the missiles and hundreds of reports reached Miami. US intelligence received countless reports, many of dubious quality or even laughable, most of which could be dismissed as describing defensive missiles. Only five reports bothered the analysts. They described large trucks passing through towns at night that were carrying very long canvas-covered cylindrical objects and could not make turns through towns without backing up and maneuvering. Defensive missile transporters, it was believed, could make such turns without undue difficulty. The reports could not be satisfactorily dismissed. Aerial confirmation. The United States had been sending U-2 surveillance flights over Cuba since the failed Bay of Pigs Invasion. A pause in reconnaissance flights occurred on 30 August 1962 when a U-2 operated by the US Air Force's Strategic Air Command flew over Sakhalin Island in the Soviet Far East by mistake. The Soviets lodged a protest and the US apologized. Nine days later, a Taiwanese-operated U-2 was lost over western China to an SA-2 surface-to-air missile (SAM). US officials were worried that one of the Cuban or Soviet SAMs in Cuba might shoot down a CIA U-2, causing another international incident. 
In a meeting with members of the Committee on Overhead Reconnaissance (COMOR) on 10 September 1962, Secretary of State Dean Rusk and National Security Advisor McGeorge Bundy restricted further U-2 flights over Cuban airspace. The resulting lack of coverage over the island for the next five weeks became known to historians as the "Photo Gap". No significant U-2 coverage was achieved over the interior of the island during this time. US officials attempted to use a Corona photo-reconnaissance satellite to photograph reported Soviet military deployments, but the imagery acquired over western Cuba by a Corona KH-4 mission on 1 October 1962 was obscured by clouds and haze and did not provide usable intelligence. At the end of September, Navy reconnaissance aircraft photographed the Soviet ship "Kasimov" with large crates on its deck the size and shape of Il-28 jet bomber fuselages. In September 1962, analysts from the Defense Intelligence Agency (DIA) noticed that Cuban surface-to-air missile sites were arranged in a pattern similar to those used by the Soviet Union to protect ICBM bases, and the DIA lobbied for resumption of U-2 flights over the island. In the past the flights had been conducted by the CIA, but pressure from the Defense Department led to that authority being transferred to the Air Force. After the loss of a CIA U-2 over the Soviet Union in May 1960, it was thought that if another U-2 were shot down, an Air Force aircraft apparently being used for a legitimate military purpose would be easier to explain than a CIA flight. When reconnaissance missions were permitted again, on 9 October 1962, poor weather kept the planes from flying. The US first obtained U-2 photographic evidence of the Soviet missiles on 14 October 1962, when a U-2 flight piloted by Major Richard Heyser took 928 pictures on a path selected by DIA analysts, capturing images of what turned out to be an SS-4 construction site at San Cristóbal, Pinar del Río Province (now in Artemisa Province), in western Cuba. President notified. On 15 October 1962, the CIA's National Photographic Interpretation Center (NPIC) reviewed the U-2 photographs and identified objects that appeared to be medium range ballistic missiles. This identification was made partly on the strength of reporting provided by Oleg Penkovsky, a double agent in the GRU working for the CIA and MI6. Although he provided no direct reports of Soviet missile deployments to Cuba, technical and doctrinal details of Soviet missile regiments that had been provided by Penkovsky in the months and years prior to the crisis helped NPIC analysts to identify the missiles in U-2 imagery. That evening, the CIA notified the Department of State, and at 8:30 pm EDT, Bundy chose to wait until the next morning to tell the President. Secretary of Defense Robert McNamara was briefed at midnight. The next morning, Bundy showed Kennedy the U-2 photographs and briefed him on the CIA's analysis of the images. At 6:30 pm EDT, Kennedy convened a meeting of the nine members of the National Security Council and five other key advisers, in a group he formally named the Executive Committee of the National Security Council (EXCOMM) after the fact, on 22 October, by National Security Action Memorandum 196. Without informing the members of EXCOMM, President Kennedy tape-recorded all of their proceedings, and Sheldon M. Stern, head of the Kennedy library, transcribed some of them. 
On 16 October, President Kennedy notified Attorney General Robert Kennedy that he was convinced the Soviets were placing missiles on Cuba, that it was a legitimate threat, and that the possibility of nuclear destruction by two world superpowers had become a reality. Robert Kennedy responded by contacting the Soviet Ambassador, Anatoly Dobrynin. Robert Kennedy expressed his "concern about what was happening" and Dobrynin "was instructed by Soviet Chairman Nikita S. Khrushchev to assure President Kennedy that there would be no ground-to-ground missiles or offensive weapons placed in Cuba". Khrushchev further assured Kennedy that the Soviet Union had no intention of "disrupting the relationship of our two countries" despite the photo evidence presented before President Kennedy. Responses considered. The US had no plan for a response in place because it had never expected that the Soviets would install nuclear missiles on Cuba. EXCOMM discussed several possible courses of action. The Joint Chiefs of Staff unanimously agreed that a full-scale attack and invasion was the only solution. They believed that the Soviets would not attempt to stop the US from conquering Cuba. Kennedy was skeptical. He concluded that attacking Cuba by air would signal to the Soviets that they could presume "a clear line" to conquer Berlin. Kennedy also believed that US allies would think of the country as "trigger-happy cowboys" who lost Berlin because they could not peacefully resolve the Cuban situation. EXCOMM considered the effect on the strategic balance of power, both political and military. The Joint Chiefs of Staff believed that the missiles would seriously alter the military balance, but McNamara disagreed. An extra 40, he reasoned, would make little difference to the overall strategic balance. The US already had approximately 5,000 strategic warheads, but the Soviet Union had only 300. McNamara concluded that the Soviets having 340 would therefore not substantially alter the strategic balance. In 1990, he reiterated that "it made "no" difference... The military balance wasn't changed. I didn't believe it then, and I don't believe it now." It was agreed that the missiles would affect the "political" balance. Kennedy had explicitly promised the American people less than a month before the crisis that "if Cuba should possess a capacity to carry out offensive actions against the United States... the United States would act." Further, US credibility among its allies and people would be damaged if the Soviet Union appeared to redress the strategic imbalance by placing missiles in Cuba. Kennedy explained after the crisis that "it would have politically changed the balance of power. It would have appeared to, and appearances contribute to reality." On 18 October 1962, Kennedy met Soviet Minister of Foreign Affairs Andrei Gromyko, who claimed that the weapons were for defensive purposes only. Not wanting to expose what he already knew and to avoid panicking the American public, Kennedy did not reveal that he was already aware of the missile buildup. Operational plans. Two Operational Plans (OPLAN) were considered. OPLAN 316 envisioned a full invasion of Cuba by Army and Marine units, supported by the Navy, following Air Force and naval airstrikes. Army units in the US would have had difficulty fielding mechanised and logistical assets, and the US Navy could not supply enough amphibious shipping to transport even a modest armoured contingent from the Army. 
OPLAN 312, primarily an Air Force and Navy carrier operation, was designed with enough flexibility to do anything from engaging individual missile sites to providing air support for OPLAN 316's ground forces. Blockade. Kennedy conferred with members of EXCOMM and other top advisers throughout 21 October and considered the two remaining options: an air strike primarily against the Cuban missile bases or a naval blockade of Cuba. A full-scale invasion was not the administration's first option. McNamara supported the naval blockade as a strong but limited military action that would leave the US in control. The term "blockade" was problematic – according to international law, a blockade is an act of war, but the Kennedy administration did not think that the Soviets would be provoked to attack by a mere blockade. Legal experts at the State Department and Justice Department concluded that a declaration of war could be avoided if another legal justification, based on the Rio Treaty for defence of the Western Hemisphere, was obtained from a resolution by a two-thirds vote from the members of the Organization of American States (OAS). Admiral George Anderson, Chief of Naval Operations wrote a position paper that helped Kennedy to differentiate between what they termed a "quarantine" of offensive weapons and a blockade of all materials, claiming that a classic blockade was not the original intention. Since it would take place in international waters, Kennedy obtained the approval of the OAS for military action under the hemispheric defence provisions of the Rio Treaty: On 19 October, the EXCOMM formed separate working groups to examine the air strike and blockade options, and by the afternoon most support in the EXCOMM had shifted to a blockade. Reservations about the plan continued to be voiced as late as 21 October, the paramount concern being that once the blockade was put into effect, the Soviets would rush to complete some of the missiles and the US could find itself bombing operational missiles if the blockade had not already forced their removal. Speech to the nation. At 3:00 pm EDT on 22 October 1962, President Kennedy formally established the executive committee (EXCOMM) with National Security Action Memorandum (NSAM) 196. At 5:00 pm, he met Congressional leaders, who opposed a blockade and demanded a stronger response. In Moscow, US Ambassador Foy D. Kohler briefed Khrushchev on the pending blockade and Kennedy's speech to the nation. Ambassadors around the world gave notice to non-Eastern Bloc leaders. Before the speech, US delegations met Canadian Prime Minister John Diefenbaker, British Prime Minister Harold Macmillan, West German Chancellor Konrad Adenauer, French President Charles de Gaulle and Secretary-General of the Organization of American States, José Antonio Mora to brief them on this intelligence and the US's proposed response. All were supportive of the US position. Over the course of the crisis, Kennedy had daily telephone conversations with Macmillan, who was publicly supportive of US actions. Shortly before his speech, Kennedy telephoned former President Dwight Eisenhower. Kennedy's conversation with the former president also revealed that the two had been consulting during the Cuban Missile Crisis. The two also anticipated that Khrushchev would respond to the Western world in a manner similar to his response during the Suez Crisis, and would possibly wind up trading off Berlin. 
At 7:00 pm EDT on 22 October, Kennedy delivered a nationwide televised address on all of the major networks announcing the discovery of the missiles. He noted: Kennedy described the administration's plan: During the speech, a directive went out to all US forces worldwide, placing them on DEFCON 3. The heavy cruiser "Newport News" was the designated flagship for the blockade, accompanied by a destroyer escort. Kennedy's speech writer Ted Sorensen stated in 2007 that the address to the nation was "Kennedy's most important speech historically, in terms of its impact on our planet." Crisis deepens. On 23 October 1962, US Air Force RF-101A/C Voodoos and US Navy RF-8A Crusaders began flying extremely hazardous low-level photo reconnaissance missions over Cuba. Only once did the Cuban Air Force scramble a MiG-19 to attempt a shoot-down, but the attempt was unsuccessful. At 11:24 am EDT on 24 October, a cable from US Under Secretary of State George Ball to the US Ambassadors to Turkey and to NATO notified them that the US was considering making an offer to withdraw missiles from Italy and Turkey in exchange for Soviet withdrawal from Cuba. Turkish officials replied that they would "deeply resent" any trade involving the US missile presence in their country. One day later, on the morning of 25 October, American journalist Walter Lippmann proposed the same thing in his syndicated column. Castro reaffirmed Cuba's right to self-defense and said that all of its weapons were defensive and Cuba would not allow an inspection. International response. In West Germany, newspapers supported the US response by contrasting it with the weak American actions in the region during the preceding months. They also expressed some fear that the Soviets might retaliate in Berlin. In France on 23 October, the crisis made the front page of all the daily newspapers. The next day, an editorial in "Le Monde" expressed doubt about the authenticity of the CIA's photographic evidence. Two days later, after a visit by a high-ranking CIA agent, the newspaper accepted the validity of the photographs. On 24 October, Pope John XXIII sent a message to the Soviet embassy in Rome, to be transmitted to the Kremlin, in which he voiced his concern for peace. In this message he stated, "We beg all governments not to remain deaf to this cry of humanity. That they do all that is in their power to save peace." Three days after Kennedy's speech, the Chinese "People's Daily" announced that "650,000,000 Chinese men and women were standing by the Cuban people." In the 29 October issue of "Le Figaro", Raymond Aron wrote in support of the American response. Soviet broadcast and communications. The crisis continued unabated, and on the evening of 24 October 1962, the Soviet TASS news agency broadcast a telegram from Khrushchev to Kennedy in which Khrushchev warned that the United States' "outright piracy" would lead to war. Khrushchev then sent a telegram to Kennedy at 9:24 pm, which was received at 10:52 pm EDT. Khrushchev stated, "if you weigh the present situation with a cool head without giving way to passion, you will understand that the Soviet Union cannot afford not to decline the despotic demands of the USA". The Soviet Union viewed the blockade as "an act of aggression" and its ships would be instructed to ignore it. After 23 October, Soviet communications with the US increasingly showed signs of having been rushed. Undoubtedly as a result of the pressure, Khrushchev often repeated himself and sent messages that lacked basic editing. 
With President Kennedy making known his aggressive intentions of a possible airstrike followed by an invasion of Cuba, Khrushchev sought a diplomatic compromise. Communications between the two superpowers had entered a new and revolutionary period, with the threat of mutual destruction now accompanying the deployment of nuclear weapons. US alert level raised. The US requested an emergency meeting of the United Nations Security Council on 25 October, and the US Ambassador to the United Nations, Adlai Stevenson, confronted Soviet Ambassador Valerian Zorin and challenged him to admit the existence of the missiles. Ambassador Zorin refused to answer. At 10:00 pm EDT the next day, the US raised the readiness level of Strategic Air Command (SAC) forces to DEFCON 2. For the only confirmed time in US history, B-52 bombers were put on continuous airborne alert. B-47 medium bombers were dispersed to military and civilian airfields and made ready to take off, fully equipped, at 15 minutes' notice. One-eighth of SAC's 1,436 bombers were on airborne alert. Some 145 intercontinental ballistic missiles, some of which targeted Cuba, were placed on alert. Air Defense Command (ADC) redeployed 161 nuclear-armed interceptors to 16 dispersal fields within nine hours, with one third on 15-minute alert status. Twenty-three nuclear-armed B-52 bombers were sent to orbit points within striking distance of the Soviet Union to demonstrate that the US was serious. Jack J. Catton later estimated that about 80 per cent of SAC's planes were ready for launch during the crisis. David A. Burchinal recalled that, by contrast: By 22 October, Tactical Air Command (TAC) had 511 fighters plus supporting tankers and reconnaissance aircraft deployed to face Cuba on one-hour alert status. TAC and the Military Air Transport Service had problems: the concentration of aircraft in Florida strained command and support echelons, which were facing critical undermanning in security, armaments, and communications. The absence of permission to use war-reserve stocks of conventional munitions forced TAC to scrounge supplies, and the lack of airlift assets to support a major airborne drop necessitated the call-up of 24 reserve squadrons. On 25 October at 1:45 am EDT, Kennedy responded to Khrushchev's telegram by stating that the US was forced into action after receiving repeated false assurances that no offensive missiles were being placed in Cuba. Deployment of the missiles "required the responses I have announced... I hope that your government will take necessary action to permit a restoration of the earlier situation." Blockade challenged. At 7:15 am EDT on 25 October, US warships attempted to intercept the tanker "Bucharest" but failed to do so. Fairly certain that the tanker did not contain any military material, the US allowed it through the blockade. Later that day, at 5:43 pm, the commander of the blockade ordered a destroyer to intercept and board the Lebanese freighter "Marucla". That took place the next day, and "Marucla" was cleared through the blockade after its cargo was checked. At 5:00 pm EDT on 25 October, William Clements announced that the missiles in Cuba were still being worked on. This was later verified by a CIA report that suggested there had been no slowdown. In response, Kennedy issued Security Action Memorandum 199, authorizing the loading of nuclear weapons onto aircraft under the command of SACEUR, which had the duty of carrying out first air strikes on the Soviet Union. 
Kennedy claimed that the blockade had succeeded when the USSR turned back fourteen ships presumed to be carrying offensive weapons. The first indication of this was in a report from British GCHQ sent to the White House Situation Room which contained intercepted communications from Soviet ships reporting their positions. On 24 October, "Kislovodsk," a Soviet cargo ship, reported a position north-east of where it had been 24 hours earlier, indicating it had "discontinued" its voyage and turned back towards the Baltic. The next day, further reports showed that more ships originally bound for Cuba had altered their course. Raising the stakes. The next morning, 26 October, Kennedy informed EXCOMM that he believed only an invasion would remove the missiles from Cuba. He was persuaded to wait and continue with military and diplomatic pressure. He agreed and ordered low-level flights over the island to be increased from two per day to every two hours. He also ordered a crash program to institute a new civil government in Cuba if an invasion went ahead. At this point the crisis appeared to be at a stalemate. The Soviets had shown no indication that they would back down and had made public media and private inter-government statements to that effect. The US had no reason to disbelieve them and was in the early stages of preparing an invasion of Cuba and a nuclear strike on the Soviet Union if it responded militarily, which the US assumed it would. Kennedy had no intention of keeping these plans secret, and with an array of Cuban and Soviet spies present Khrushchev was made aware of them. The implicit threat of air strikes on Cuba followed by an invasion allowed the United States to exert pressure in future talks, and the prospect of military action helped to accelerate Khrushchev's proposal for a compromise. Throughout the closing stages of October 1962, Soviet communications to the United States became increasingly defensive, and Khrushchev's tendency to use poorly phrased and ambiguous language during negotiations increased the United States' confidence and clarity in messaging. Leading Soviet figures failed to mention that only the Cuban government could agree to inspections of the territory, and continued to make arrangements relating to Cuba without Castro's knowledge. According to Dean Rusk, Khrushchev "blinked": he began to panic from the consequences of his own plan, and this was reflected in the tone of Soviet messages. This allowed the US to dominate negotiations in late October. The escalating situation also caused Khrushchev to abandon plans for a possible Warsaw Pact invasion of Albania, which was being discussed in the Eastern Bloc following the Vlora incident the previous year. Secret negotiations. At 1:00 pm EDT on 26 October, John A. Scali of ABC News met Aleksandr Fomin, the cover name of Alexander Feklisov, the KGB station chief in Washington, at Fomin's request. Following the instructions of the Politburo of the CPSU, Fomin noted, "War seems about to break out." He asked Scali to use his contacts to talk to his "high-level friends" at the State Department to see if the US would be interested in a diplomatic solution. He suggested that the language of the deal would contain an assurance from the Soviet Union to remove the weapons under UN supervision and that Castro would publicly announce that he would not accept such weapons again, in exchange for a public statement by the US that it would not invade Cuba. 
The US responded by asking the Brazilian government to pass a message to Castro that the US would be "unlikely to invade" if the missiles were removed. At 6:00 pm EDT on 26 October, the State Department started receiving a message that appeared to be written personally by Khrushchev. It was Saturday 2:00 am in Moscow. The long letter took several minutes to arrive, and it took translators additional time to translate and transcribe it. Robert F. Kennedy described the letter as "very long and emotional". Khrushchev reiterated the basic outline that had been stated to Scali earlier in the day: "I propose: we, for our part, will declare that our ships bound for Cuba are not carrying any armaments. You will declare that the United States will not invade Cuba with its troops and will not support any other forces which might intend to invade Cuba. Then the necessity of the presence of our military specialists in Cuba will disappear." At 6:45 pm EDT, news of Fomin's offer to Scali was finally heard and was interpreted as a "set up" for the arrival of Khrushchev's letter. The letter was then considered official and accurate, although it was later learned that Fomin was almost certainly operating without official backing. Additional study of the letter was ordered and continued into the night. Crisis continues. Castro, on the other hand, was convinced that an invasion of Cuba was imminent, and on 26 October he sent a telegram to Khrushchev that appeared to call for a pre-emptive nuclear strike on the US in case of attack. In a 2010 interview, Castro expressed regret about his 1962 stance on first use: "After I've seen what I've seen, and knowing what I know now, it wasn't worth it at all." Castro also ordered all anti-aircraft weapons in Cuba to fire on any US aircraft. Previous orders had been to fire only on groups of two or more. At 6:00 am EDT on 27 October, the CIA delivered a memo reporting that three of the four missile sites at San Cristobal and both sites at Sagua la Grande appeared to be fully operational. It also noted that the Cuban military continued to organise for action but was under orders not to act unless attacked. At 9:00 am EDT on 27 October, Radio Moscow began broadcasting a message from Khrushchev. Contrary to the letter of the night before, the message offered a new trade: the missiles on Cuba would be removed in exchange for the removal of the Jupiter missiles from Italy and Turkey. At 10:00 am EDT, the executive committee met again to discuss the situation and came to the conclusion that the change in the message was because of internal debate between Khrushchev and other party officials in the Kremlin. Kennedy realised that he would be in an "insupportable position if this becomes Khrushchev's proposal" because the missiles in Turkey were not militarily useful and were being removed anyway, and "It's gonna – to any man at the United Nations or any other rational man, it will look like a very fair trade." Bundy explained why Khrushchev's public acquiescence could not be considered: "The current threat to peace is not in Turkey, it is in Cuba." McNamara noted that another tanker, the "Grozny", was still some distance out and should be intercepted. He also noted that they had not made the Soviets aware of the blockade line and suggested relaying that information to them via U Thant at the United Nations. While the meeting progressed, at 11:03 am EDT a new message began to arrive from Khrushchev. The message stated, in part: "You are disturbed over Cuba. 
You say that this disturbs you because it is ninety-nine miles by sea from the coast of the United States of America. But... you have placed destructive missile weapons, which you call offensive, in Italy and Turkey, literally next to us... I therefore make this proposal: We are willing to remove from Cuba the means which you regard as offensive... Your representatives will make a declaration to the effect that the United States... will remove its analogous means from Turkey... and after that, persons entrusted by the United Nations Security Council could inspect on the spot the fulfillment of the pledges made." The executive committee continued to meet through the day. Throughout the crisis, Turkey had repeatedly stated that it would be upset if the Jupiter missiles were removed. Italy's Prime Minister Amintore Fanfani, who was also Foreign Minister "ad interim", offered to allow withdrawal of the missiles deployed in Apulia as a bargaining chip. He gave the message to one of his most trusted friends, Ettore Bernabei, general manager of RAI-TV, to convey to Arthur M. Schlesinger Jr. Bernabei was in New York to attend an international conference on satellite TV broadcasting. On the morning of 27 October, a U-2F (the third CIA U-2A, modified for air-to-air refuelling) piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida. At approximately 12:00 pm EDT, the aircraft was struck by an SA-2 surface-to-air missile launched from Cuba. The aircraft crashed, and Anderson was killed. Stress in negotiations between the Soviets and the US intensified; only later was it assumed that the decision to fire the missile was made locally by an undetermined Soviet commander, acting on his own authority. Later that day, at about 3:41 pm EDT, several US Navy RF-8A Crusader aircraft, on low-level photo-reconnaissance missions, were fired upon. At 4:00 pm EDT, Kennedy recalled members of EXCOMM to the White House and ordered that a message should immediately be sent to U Thant asking the Soviets to suspend work on the missiles while negotiations were carried out. During the meeting, General Maxwell Taylor delivered the news that the U-2 had been shot down. Kennedy had earlier claimed he would order an attack on such sites if fired upon, but he decided to not act unless another attack was made. On 28 October 1962, Khrushchev told his son Sergei that the shooting down of Anderson's U-2 was by the "Cuban military at the direction of Raúl Castro". On 27 October Bobby Kennedy relayed a message to the Soviet Ambassador that President Kennedy was under pressure from the military to use force against Cuba and that "an irreversible chain of events could occur against his will" as "the president is not sure that the military will not overthrow him and seize power". He therefore implored Khrushchev to accept Kennedy's proposed agreement. Forty years later, McNamara said: Daniel Ellsberg said that Robert Kennedy (RFK) told him in 1964 that after the U-2 was shot down and the pilot killed, he (RFK) told Soviet ambassador Dobrynin, "You have drawn first blood ... . [T]he president had decided against advice ... not to respond militarily to that attack, but he [Dobrynin] should know that if another plane was shot at, ... we would take out all the SAMs and anti-aircraft ... . And that would almost surely be followed by an invasion." Drafting response. 
Emissaries sent by both Kennedy and Khrushchev agreed to meet at the Yenching Palace Chinese restaurant in the Cleveland Park neighbourhood of Washington, DC, on Saturday evening, 27 October. Kennedy suggested taking Khrushchev's offer to trade away the missiles. Unknown to most members of the EXCOMM, but with the support of his brother the president, Robert Kennedy had been meeting the Soviet Ambassador Dobrynin in Washington to discover whether the intentions were genuine. The EXCOMM was against the proposal because it would undermine NATO's authority, and the Turkish government had repeatedly stated that it was against any such trade. As the meeting progressed, a new plan emerged, and Kennedy was slowly persuaded. The new plan called for him to ignore the latest message and instead to return to Khrushchev's earlier one. Kennedy was initially hesitant, feeling that Khrushchev would no longer accept the deal because a new one had been offered, but Llewellyn Thompson argued that it was still possible. White House Special Counsel and Adviser Ted Sorensen and Robert Kennedy left the meeting and returned 45 minutes later, with a draft letter to that effect. The President made several changes, had it typed, and sent it. After the EXCOMM meeting, a smaller meeting continued in the Oval Office. The group argued that the letter should be underscored with an oral message to Dobrynin that stated that if the missiles were not withdrawn, military action would be used to remove them. Rusk added one proviso that no part of the language of the deal would mention Turkey, but there would be an understanding that the missiles would be removed "voluntarily" in the immediate aftermath. The president agreed, and the message was sent. At Rusk's request, Fomin and Scali met again. Scali asked why the two letters from Khrushchev were so different, and Fomin claimed it was because of "poor communications". Scali replied that the claim was not credible and shouted that he thought it was a "stinking double cross". He went on to claim that an invasion was only hours away, and Fomin stated that a response to the US message was expected from Khrushchev shortly and urged Scali to tell the State Department that no treachery was intended. Scali said that he did not think anyone would believe him, but he agreed to deliver the message. The two went their separate ways, and Scali immediately typed out a memo for the EXCOMM. Within the US establishment, it was understood that ignoring the second offer and returning to the first put Khrushchev in a terrible position. Military preparations continued, and all active duty Air Force personnel were recalled to their bases for possible action. Robert Kennedy later recalled the mood: "We had not abandoned all hope, but what hope there was now rested with Khrushchev's revising his course within the next few hours. It was a hope, not an expectation. The expectation was military confrontation by Tuesday [30 October], and possibly tomorrow [29 October] ..." At 8:05 pm EDT, the letter drafted earlier in the day was delivered. The message read, "As I read your letter, the key elements of your proposals—which seem generally acceptable as I understand them—are as follows: 1) You would agree to remove these weapons systems from Cuba under appropriate United Nations observation and supervision; and undertake, with suitable safe-guards, to halt the further introduction of such weapon systems into Cuba. 
2) We, on our part, would agree—upon the establishment of adequate arrangements through the United Nations, to ensure the carrying out and continuation of these commitments (a) to remove promptly the quarantine measures now in effect and (b) to give assurances against the invasion of Cuba." The letter was also released directly to the press to ensure it could not be "delayed". With the letter delivered, a deal was on the table. As Robert Kennedy noted, there was little expectation it would be accepted. At 9:00 pm EDT, the EXCOMM met again to review the actions for the following day. Plans were drawn up for air strikes on the missile sites as well as on economic targets, notably petroleum storage. McNamara stated that they had to "have two things ready: a government for Cuba, because we're going to need one; and secondly, plans for how to respond to the Soviet Union in Europe, because sure as hell they're going to do something there". At 12:12 am EDT, on 27 October, the US informed its NATO allies that "the situation is growing shorter... the United States may find it necessary within a very short time in its interest and that of its fellow nations in the Western Hemisphere to take whatever military action may be necessary." To add to the concern, at 6:00 am, the CIA reported that all missiles in Cuba were ready for action. On 27 October, Khrushchev also received a letter from Castro, now known as the Armageddon Letter (dated the day before), which urged the use of nuclear force in the event of an attack on Cuba: "I believe the imperialists' aggressiveness is extremely dangerous and if they actually carry out the brutal act of invading Cuba in violation of international law and morality, that would be the moment to eliminate such danger forever through an act of clear legitimate defense, however harsh and terrible the solution would be," Castro wrote. Averted nuclear launch. Later that same day, on what the White House later called "Black Saturday", the US Navy dropped a series of "signalling" depth charges ("practice" depth charges the size of hand grenades) on a Soviet submarine, the "B-59", at the blockade line, unaware that it was armed with a nuclear-tipped torpedo that could be launched if the submarine was damaged by depth charges or surface fire. The submarine was too deep to monitor radio traffic, and the captain of the "B-59", Valentin Grigoryevich Savitsky, assuming, after live ammunition fire at his submarine, that a war had started, proposed to launch the nuclear torpedo at the US ships. The decision to launch the "special weapon" normally only required the agreement of the ship's commanding officer and political officer, but the commander of the submarine flotilla, Vasily Arkhipov, was aboard "B-59" and he also had to agree. Arkhipov did not give his consent and the nuclear torpedo was not launched. (These events only became publicly known in 2002. See Submarine close call.) On the same day a U-2 spy plane made an accidental and unauthorised 90-minute overflight of the Soviet Union's far eastern coast. The Soviets responded by scrambling MiG fighters from Wrangel Island; in turn, the Americans launched F-102 fighters armed with nuclear air-to-air missiles over the Bering Sea. Resolution. On Saturday, 27 October, after much deliberation between the Soviet Union and Kennedy's cabinet, Kennedy secretly agreed to remove all missiles in Turkey, on the border of the Soviet Union, and possibly those in southern Italy, in exchange for Khrushchev removing all missiles in Cuba. 
There is some dispute as to whether removing the missiles from Italy was part of the secret agreement. Khrushchev wrote in his memoirs that it was, and when the crisis had ended McNamara gave the order to dismantle the missiles in both Italy and Turkey. At this point, Khrushchev knew things the US did not. First, that the shooting down of the U-2 by a Soviet missile violated direct orders from Moscow, and Cuban anti-aircraft fire against other US reconnaissance aircraft also violated direct orders from Khrushchev to Castro. Second, the Soviets already had 162 nuclear warheads on Cuba that the US did not know were there. Third, the Soviets and Cubans on the island would almost certainly have responded to an invasion by using them, even though Castro believed that everyone in Cuba would die as a result. Khrushchev also knew, but may not have considered, that he had submarines nearby armed with nuclear weapons of which the US Navy may not have been aware. Khrushchev knew he was losing control. President Kennedy had been told in early 1961 that a nuclear war would probably kill a third of humanity, with most or all of those deaths concentrated in the US, the USSR, Europe and China, and Khrushchev may have received a similar estimate. With this background, when Khrushchev heard of Kennedy's threats as relayed by Robert Kennedy to Soviet Ambassador Dobrynin, he immediately drafted his acceptance of Kennedy's latest terms from his dacha without involving the Politburo, as he had previously, and had them immediately broadcast over Radio Moscow, which he believed the US would hear. In that broadcast at 9:00 am EST, on 28 October 1962, Khrushchev stated that "the Soviet government, in addition to previously issued instructions on the cessation of further work at the building sites for the weapons, has issued a new order on the dismantling of the weapons which you describe as 'offensive' and their crating and return to the Soviet Union." At 10:00 am on 28 October, Kennedy first learned of Khrushchev's solution to the crisis: the US would remove the 15 Jupiters in Turkey and the Soviets would remove the missiles from Cuba. Khrushchev had made the offer in a public statement for the world to hear. Despite almost solid opposition from his senior advisers, Kennedy accepted the Soviet offer. "This is a pretty good play of his," Kennedy said, according to a tape recording that he made secretly of the Cabinet Room meeting. Kennedy had deployed the Jupiters in March 1962, causing a stream of angry outbursts from Khrushchev. "Most people will think this is a rather even trade and we ought to take advantage of it," Kennedy said. Vice President Lyndon Johnson was the first to endorse the missile swap, but others continued to oppose it. Finally, Kennedy ended the debate. "We can't very well invade Cuba with all its toil and blood," Kennedy said, "when we could have gotten them out by making a deal on the same missiles on Turkey. If that's part of the record, then you don't have a very good war." Kennedy immediately responded to Khrushchev's letter, issuing a statement calling it "an important and constructive contribution to peace". He continued this with a formal letter: Kennedy's planned statement would also contain suggestions he had received from his adviser Arthur Schlesinger Jr. in a "Memorandum for the President" describing the "Post Mortem on Cuba". On 28 October, Kennedy participated in telephone conversations with Eisenhower and fellow former US President Harry Truman. 
In these calls, Kennedy revealed that he thought the crisis would result in the two superpowers being "toe to toe" in Berlin by the end of the following month and expressed concern that the Soviet setback in Cuba would "make things tougher" there. He also informed his predecessors that he had rejected the public Soviet offer to withdraw from Cuba in exchange for the withdrawal of US missiles from Turkey. The US continued the blockade of Cuba. In the following days aerial reconnaissance showed that the Soviets were making progress in removing the missile systems. The 42 missiles and their support equipment were loaded onto eight Soviet ships. On 2 November 1962, Kennedy addressed the US via radio and television broadcasts concerning the dismantling of the Soviet R-12 missile bases located in the Caribbean. The ships left Cuba on November 5 to 9. The US made a final visual check as each of the ships passed the blockade line. Further diplomatic efforts were required to remove the Soviet Il-28 bombers, and they were loaded on three Soviet ships on 5 and 6 December. Concurrent with the Soviet commitment on the Il-28s, the US government announced the end of the blockade from 6:45 pm EST on 20 November 1962. At the time when the Kennedy administration believed that the Cuban Missile Crisis was resolved, nuclear tactical rockets remained in Cuba which were not part of the Kennedy-Khrushchev understanding and the Americans did not know about them. The Soviets changed their minds, fearing possible future Cuban militant steps, and on 22 November 1962, Deputy Premier of the Soviet Union Anastas Mikoyan told Castro that the rockets with the nuclear warheads were being removed as well. The Cuban Missile Crisis was solved in part by a secret agreement between John F. Kennedy and Nikita Khrushchev. The Kennedy-Khrushchev Pact was known to only nine US officials at the time of its creation in October 1962 and was first officially acknowledged at a conference in Moscow in January 1989 by Soviet Ambassador Anatoly Dobrynin and Kennedy's speechwriter Theodore Sorensen. In his negotiations with Dobrynin, Robert Kennedy informally proposed that the Jupiter missiles in Turkey would be removed "within a short time after this crisis was over". Under an operation code-named "Operation Pot Pie," the removal of the Jupiters from Italy and Turkey began on 1 April, and was completed by 24 April 1963. The initial plans were to recycle the missiles for use in other programs, but NASA and the USAF were not interested in retaining the missile hardware. The missile bodies were destroyed on site, while warheads, guidance packages, and launching equipment worth $14 million were returned to the United States. The dismantling operations were named Pot Pie I for Italy and Pot Pie II for Turkey by the United States Air Force. The outcome of the Kennedy-Khrushchev Pact was that the US would remove their rockets from Italy and Turkey and that the Soviets had no intention of resorting to nuclear war if they were out-gunned by the US. Because the withdrawal of the Jupiter missiles from NATO bases in Italy and Turkey was not made public at the time, Khrushchev appeared to have lost the conflict and become weakened. The perception was that Kennedy had won the contest between the superpowers and that Khrushchev had been humiliated. Both Kennedy and Khrushchev took every step to avoid full conflict despite pressures from their respective governments. Khrushchev held power for another two years. 
As a direct result of the crisis, the United States and the Soviet Union set up a direct line of communication. The hotline between the Soviet Union and the United States was a way for the President and the Premier to negotiate directly should a crisis like this ever happen again. Nuclear forces. By the time of the crisis in October 1962, the nuclear stockpiles of the two countries numbered approximately 26,400 weapons for the United States and 3,300 for the Soviet Union. For the US, around 3,500 (with a combined yield of approximately 6,300 megatons) would have been used in attacking the Soviet Union. The Soviets had considerably less strategic firepower at their disposal: some 300–320 bombs and warheads, with no submarine-based weapons in a position to threaten the US mainland and with most of their intercontinental delivery systems relying on bombers that would have had difficulty penetrating North American air defence systems. They had already moved 158 warheads to Cuba; between 95 and 100 of them, most short-range, would have been ready for use if the US had invaded Cuba. The US had approximately 4,375 nuclear weapons deployed in Europe, most of which were tactical weapons such as nuclear artillery, with around 450 of them for ballistic missiles, cruise missiles, and aircraft; the Soviets had more than 550 similar weapons in Europe. Aftermath. Cuban leadership. Decisions on how to resolve the crisis had been made exclusively by Kennedy and Khrushchev, and Cuba perceived the outcome as a betrayal by the Soviets. Castro was especially upset that certain questions of interest to Cuba, such as the status of the US Naval Base in Guantánamo, were not addressed, and Cuban–Soviet relations deteriorated. Historian Arthur Schlesinger believed that when the missiles were withdrawn, Castro was more angry with Khrushchev than with Kennedy because Khrushchev had not consulted him before making the decision. Although Castro was infuriated by Khrushchev, he had still planned to strike the US with the remaining missiles if Cuba was invaded. A few weeks after the crisis, during an interview with the British communist newspaper the "Daily Worker", Guevara was still fuming over the perceived Soviet betrayal and told correspondent Sam Russell that, if the missiles had been under Cuban control, they would have been launched. Guevara said later that the cause of socialist liberation from global "imperialist aggression" would have been worth the possibility of "millions of atomic war victims". The missile crisis further convinced Guevara that the world's two superpowers, the United States and the Soviet Union, were using Cuba as a pawn in their global strategies, and after this he denounced the Soviets almost as frequently as he denounced the Americans. Romanian leadership. During the crisis, Gheorghe Gheorghiu-Dej, general secretary of Romania's communist party, sent a letter to President Kennedy dissociating Romania from Soviet actions. This convinced the American administration of Bucharest's intentions of detaching itself from Moscow. Soviet leadership. The realisation that the world had come close to thermonuclear war caused Khrushchev to propose a far-reaching easing of tensions with the US. In a letter to President Kennedy dated 30 October 1962, Khrushchev suggested initiatives that were intended to prevent the possibility of another nuclear crisis. 
These included a non-aggression treaty between the North Atlantic Treaty Organization (NATO) and the Warsaw Pact, or even disbanding these military blocs; a treaty to cease all nuclear weapons testing and possibly eliminate all nuclear weapons; resolution of the difficult question of Germany by both sides accepting the existence of West Germany and East Germany; and US recognition of the government of mainland China. The letter invited counter-proposals and further exploration of these and other questions through peaceful negotiations. Khrushchev invited Norman Cousins, the editor of a major US periodical and an anti-nuclear weapons activist, to serve as liaison with Kennedy. Cousins met with Khrushchev for four hours in December 1962. Kennedy's response to Khrushchev's proposals was lukewarm, but he told Cousins that he felt obliged to consider them because he was under pressure from hardliners in the US national security apparatus. The United States and the Soviet Union subsequently agreed to a treaty banning atmospheric testing of nuclear weapons, known as the "Partial Nuclear Test Ban Treaty". The US and the USSR also created a communications link, the Moscow–Washington hotline, to enable the leaders of the two Cold War countries to speak directly to each other in any future crisis. These compromises embarrassed Khrushchev and the Soviet Union because the withdrawal of US missiles from Italy and Turkey had remained a secret deal between Kennedy and Khrushchev. Khrushchev went to Kennedy because he thought that the crisis was getting out of hand, but the Soviets were seen to be retreating from circumstances that they had started. Khrushchev's fall from power two years later was partly because of the Soviet Politburo's embarrassment at his eventual concessions to the US and his ineptitude in precipitating the crisis in the first place. According to Dobrynin, the top Soviet leadership took the Cuban outcome as "a blow to its prestige bordering on humiliation". US leadership. The worldwide DEFCON 3 status of US Forces was returned to DEFCON 4 on 20 November 1962. General Curtis LeMay told Kennedy that the resolution of the crisis was the "greatest defeat in our history" but his was a minority view. LeMay had pressed for an immediate invasion of Cuba as soon as the crisis began, and he still favored invading Cuba even after the Soviets had withdrawn their missiles. Twenty-five years later, LeMay still believed that "We could have gotten not only the missiles out of Cuba, we could have gotten the Communists out of Cuba at that time." By 1962, President Kennedy had faced four crisis situations: the failure of the Bay of Pigs Invasion; settlement negotiations between the pro-Western government of Laos and the Pathet Lao communist movement ("Kennedy sidestepped Laos, whose rugged terrain was no battleground for American soldiers."); the construction of the Berlin Wall; and the Cuban Missile Crisis. Kennedy believed that another failure to gain control and stop communist expansion would irreparably damage US credibility. He was determined to "draw a line in the sand" and prevent a communist victory in Vietnam. He told James Reston of "The New York Times" immediately after his Vienna summit meeting with Khrushchev, "Now we have a problem making our power credible and Vietnam looks like the place." 
At least four contingency strikes were armed and launched from Florida against Cuban airfields and suspected missile sites in 1963 and 1964, although all were diverted to the Pinecastle Range Complex after the planes had passed Andros Island. Critics, including Seymour Melman and Seymour Hersh, suggested that the Cuban Missile Crisis had encouraged the United States to use military means, as in the later Vietnam War. Similarly, Lorraine Bayard de Volo suggested that the masculine brinksmanship of the Cuban Missile Crisis had become a "touchstone of toughness by which presidents are measured". Actions in 1962 had a significant influence on the policy decisions of future occupants of the White House, and led to foreign policy decisions such as President Lyndon B. Johnson's escalation of the war in Vietnam three years later. Human casualties. The body of U-2 pilot Anderson was returned to the US and was buried with full military honours in South Carolina. He was the first recipient of the newly created Air Force Cross, which was awarded posthumously. Although Anderson was the only combatant fatality during the crisis, 11 crew members of three reconnaissance Boeing RB-47 Stratojets of the 55th Strategic Reconnaissance Wing were also killed in crashes during the period between 27 September and 11 November 1962. Seven crew died when a Military Air Transport Service Boeing C-135B Stratolifter delivering ammunition to Guantanamo Bay Naval Base stalled and crashed on landing approach on 23 October. Later revelations. Submarine close call. What may have been the most dangerous moment in the crisis was not recognized until the Cuban Missile Crisis Havana conference in October 2002, which marked its 40th anniversary. The three-day conference was sponsored by the private National Security Archive, Brown University and the Cuban government and attended by many of the veterans of the crisis. They learned that on 27 October 1962, a group of eleven United States Navy destroyers and the aircraft carrier USS "Randolph" had located a diesel-powered, nuclear-armed Soviet Project 641 (NATO designation "Foxtrot") submarine, the "B-59", near Cuba. Although the submarine was in international waters, the Americans started dropping practice depth charges in an attempt to force it to surface. There had been no contact from Moscow for a number of days and the submarine was running too deep to monitor radio traffic, so those on board did not know whether war had broken out. The captain of the submarine, Valentin Savitsky, had no way of knowing that the depth charges were non-lethal "practice" rounds, intended as warning shots to force him to surface. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. While surfacing, the "B-59" "came under machine-gun fire from [U.S. ASW S-2] Tracker aircraft. The fire rounds landed either to the sides of the submarine's hull or near the bow. All these provocative actions carried out by surface ships in immediate proximity, and ASW aircraft flying some 10 to 15 meters above the boat had a detrimental impact on the commander, prompting him to take extreme measures... the use of special weapons." As firing live ammunition at a submarine was strictly prohibited, Captain Savitsky assumed that his submarine was doomed and that World War III had started. The Americans, for their part, did not know that the "B-59" was armed with a 15-kiloton nuclear torpedo, roughly the power of the bomb dropped on Hiroshima. 
Other US destroyers joined in, pummelling the submerged "B-59" with more explosives. Savitsky ordered the nuclear torpedo to be prepared for firing; its target was to be the USS "Randolph", the aircraft carrier leading the task force. An argument broke out in the sweltering control room of the "B-59" among three senior officers: "B-59" captain Savitsky, political officer Ivan Semyonovich Maslennikov, and deputy brigade commander Captain 2nd rank (US Navy Commander rank equivalent) Vasily Arkhipov. Accounts differ about whether Arkhipov convinced Savitsky not to make the attack or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface. The decision to launch the nuclear torpedo required the consent of all three senior officers, and of the three, Arkhipov alone refused to give his consent. Arkhipov's reputation was a key factor in the control room debate. The previous year he had exposed himself to severe radiation in order to save a submarine with an overheating nuclear reactor. During the October 2002 conference, McNamara stated that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said, "A guy called Vasily Arkhipov saved the world." Possibility of nuclear launch. In early 1992 it was confirmed that Soviet forces in Cuba had already received tactical nuclear warheads for their artillery rockets and Il-28 bombers when the crisis broke. Castro stated that he would have recommended their use if the US had invaded, even if Cuba was destroyed. Fifty years after the crisis, Graham Allison wrote: BBC journalist Joe Matthews published the story on 13 October 2012, after news of the 100 tactical nuclear warheads mentioned by Graham Allison in the excerpt above. Khrushchev feared that Castro's hurt pride and widespread Cuban indignation over the concessions he had made to Kennedy might lead to a breakdown of the agreement between the Soviet Union and the United States. To prevent this, Khrushchev decided to offer to give Cuba more than 100 tactical nuclear weapons that had been shipped there with the long-range missiles but, crucially, had escaped the notice of US intelligence. Khrushchev determined that because the Americans had not listed the missiles on their list of demands, keeping them in Cuba would be in the Soviet Union's interests. Anastas Mikoyan had the task of negotiating with Castro over the missile transfer deal to prevent a breakdown in relations between Cuba and the Soviet Union. While in Havana, Mikoyan witnessed the mood swings and paranoia of Castro, who was convinced that Moscow had made the agreement with the US at the expense of Cuba's defence. Mikoyan, on his own initiative, decided that Castro and his military should not under any circumstances be given control of weapons with an explosive force equal to 100 Hiroshima-sized bombs. He defused the seemingly intractable situation, which risked re-escalating the crisis, on 22 November 1962. During a tense, four-hour meeting, Mikoyan convinced Castro that despite Moscow's desire to help, it would be in breach of an unpublished Soviet law, which did not actually exist, to transfer the missiles permanently into Cuban hands and provide them with an independent nuclear deterrent. 
Castro was forced to give way and, much to the relief of Khrushchev and the rest of the Soviet government, the tactical nuclear weapons were crated and returned by sea to the Soviet Union during December 1962. In popular culture. The American popular media, especially television, made frequent use of the events of the missile crisis in both fictional and documentary forms. Jim Willis includes the crisis as one of the 100 "media moments that changed America". Sheldon Stern found that a half century later there were still many "misconceptions, half-truths, and outright lies" that had shaped media versions of what happened in the White House during those two weeks. Historian William Cohn argued in a 1976 article that television programs were typically the main source used by the American public to know about and interpret the past. According to Cold War historian Andrei Kozovoi, the Soviet media proved somewhat disorganized, as it was unable to generate a coherent popular history. Khrushchev lost power and was airbrushed out of the story, and Cuba was no longer portrayed as a heroic David against the American Goliath. One contradiction that pervaded the Soviet media campaign was between the pacifistic rhetoric of the peace movement, which emphasized the horrors of nuclear war, and the militant insistence on preparing Soviet citizens for war against American aggression.
6828
44501925
https://en.wikipedia.org/wiki?curid=6828
Aquilegia
Aquilegia, commonly known as columbines, is a genus of perennial flowering plants in the family Ranunculaceae (buttercups). The genus includes between 80 and 400 taxa (described species and subspecies) with natural ranges across the Northern Hemisphere. Natural and introduced populations of "Aquilegia" exist on all continents but Antarctica. Known for their high physical variability and ease of hybridization, columbines are popular garden plants and have been used to create many cultivated varieties. "Aquilegia" typically possess stiff stems and leaves that divide into multiple leaflets. Columbines often have colorful flowers with five sepals and five petals. The petals generally feature nectar spurs which differ in length between species. In North America, morphological variations in spurs evolved to suit different pollinators. Some species and varieties of columbines are naturally spurless. In cultivation, varieties bearing significantly altered physical traits such as double flowering are prevalent. Columbines were associated with fertility goddesses in ancient Greece and ancient Rome, and archeological evidence suggests "Aquilegia" were in cultivation in Roman Britain by the 2nd century AD. Despite often being toxic, columbines have been used by humans as herbal remedies, perfume, and food. Practitioners of Asian traditional medicine, Indigenous North Americans, and Medieval Europeans have considered portions of the plants to have medicinal uses. Selective breeding and hybridization of columbines have occurred for centuries, with exchanges between Old and New World species creating further diversity. Etymology. The 1st-century AD Greek writer Dioscorides called columbines "Isopyrum", a name presently applied to another genus. In the 12th century, the abbess and polymath Hildegard of Bingen referred to the plants as "agleya", from which the genus's name in German, "Akelei", derives. The first use of "aquilegia" with regard to columbines was in the 13th century by Albertus Magnus. In the 15th and 16th centuries, the names "Colombina", "Aquilina", and "Aquileia" came into use. With the Swedish biologist Carl Linnaeus's 1753 "Species Plantarum", the formal name for the genus became "Aquilegia", though limited use of "Aquilina" persisted in scientific usage until at least 1901. Several scientific and common names for the genus "Aquilegia" derive from its appearance. The genus name "Aquilegia" may come from the Latin word for "eagle", "aquila", in reference to the petals' resemblance to eagle talons. Another possible etymology for "Aquilegia" is a derivation from Latin terms meaning "to collect water", from "aquilegium" (a container of water), or from a word for "dowser" or "water-finder", in reference to the profusion of nectar in the spurs. The most common English-language name, "columbine", likely originates in the dove-like appearance of the sepals ("columba" being Latin for "dove"). There are a number of other common names for "Aquilegia" across different languages. In English, these include "granny's bonnet" for some plants in the species "Aquilegia vulgaris". In French, the word "ancolie" is the common name for "Aquilegia", while individual members of the genus have been called "gants-de-Notre-Dame" ("Our Lady's glove"); "amor-nascoto" ("love-born") has been used in Italian. Description. "Aquilegia" is a genus of herbaceous, perennial flowering plants in the family Ranunculaceae (buttercups). The genus is highly variable in appearance. 
Though they are perennials, certain species are short-lived, with some exhibiting lifespans more similar to biennials and others only flourishing for six to eight years. Following a dormant period in the winter, columbines will grow foliage and have a brief flowering period. Some columbines bloom in the first year after sowing; others bloom in their second. Later, seed heads will emerge and split, sowing new seed. The foliage lives through the summer before wilting and dying as fall approaches. "Aquilegia" plants grow from slim, woody rhizomes from which multiple aerial stems rise. The leaves can grow in both basal (from the base of the plant) and cauline (from the aerial stem) arrangements. Leaves emanating from closer to the plant's core are generally borne on flexible petioles, while leaves further from the core generally lack petioles. The compound leaves of "Aquilegia" are generally ternate (each leaf dividing into three leaflets), biternate (each leaf dividing into three components that in turn each bear three leaflets, for a total of nine leaflets), or triternate (each leaf divides into three components three times, for a total of 27 leaflets). The flowering stems emerge from rosettes during the spring and summer. Each inflorescence appears at the terminus of an aerial stem and can reach long. Depending on the species, an inflorescence will feature one to ten cymes (flower clusters) or solitary flowers. Flower morphology varies across the genus, but all columbine flowers emerge from buds that are initially nodding. Flowers can be monochromatic or display multiple colors. The typical flower color for columbines is blue, in shades ranging into purple and near-black. Blue flowering is especially the norm in European columbines, where only "A. aurea" possesses yellow flowers. In North America, yellow and red flowers are typical, with blue and blue-purple flowers almost exclusive to high-altitude species. The American botanist Verne Grant hypothesized that light-colored "Aquilegia", especially in North America, might have adopted their shading to increase their visibility to pollinators in twilight. The perianth (non-reproductive portion) of "Aquilegia" flowers generally comprises five sepals that look like petals and five petals. Each petal typically comprises two portions: a blade, which is broad and projects towards the front of the flower, and a nectar spur, a nectar-bearing structure which projects backwards. The hollow spurs attract pollinators and give columbine flowers a distinctive appearance. Depending on the species, spurs can have a hooked, horn-like appearance, with straight to coiled spurs also present in the genus. Some columbines, such as "A. ecalcarata", are naturally spurless. Recessively spurless individuals and populations can also be found within typically spurred species. The reproductive portions of columbine flowers comprise the stamens (male) and gynoecium (female). The stamens, which bear the anthers from which pollen emerges, form whorls of five around the gynoecium. The total number of stamens varies between species. There are generally scale-shaped staminodes between the stamens and the female pistil structures. The flowers undergo three stages of anthesis: a premale stage, where the perianth is open but the anthers are not dehisced (split to expose pollen); a male stage, where the perianth is present and the anthers are dehisced; and a postmale stage, where the anthers have withered but the perianth remains. 
"Aquilegia" are bisexual (featuring both male and female organs) and capable of self-pollination, through either or both autogamy (does not require assistance from pollinators) and geitonogamy (requires pollinators). Autogamy has been observed as the primary fertilization mechanism in "A. paui". "A. formosa" and "A. eximia" may exhibit adichogamy, where male and female organs do not operate simultaneously to prevent self-fertilization. Fertilization via cross pollination also occurs in "Aquilegia", with pollinators carrying pollen from one flower to the stigma of another. "Aquilegia" fruit are follicles. These follicles have a split on one side and terminate with a curling tip known as a "beak". Columbine seeds are generally obovoid with black, smooth exteriors. Columbine seeds are in a dormant state at the point of sowing. Seed germination is primarily dependent on temperature, with seeds typically requiring a multi-month period of summer temperatures followed by a multi-week to multi-month exposure to winter temperatures (vernalization) prior to germinating once temperatures warm with the arrival of spring. This prevents seedlings from emerging until there are survivable environmental conditions. The chromosome number for columbines is "2n"=14. Individual plants have been recorded with other anomalous chromosome numbers, ranging up to 2"n"=32. It is possible that B chromosomes impact the phenotype and the fertility of individual plants that possess them. Phytochemistry. Among cyanophore (organisms that produces a blue color) "Aquilegia" like "A. vulgaris", the cyanogenic glycosides compounds dhurrin and triglochinin have been observed. Cyanogenic glycosides generally taste bitter and can be toxic to animals and humans. Ingestion of of fresh "A. vulgaris" leaves by a human was observed as causing convulsions, respiratory distress, and heart failure. A child who consumed 12 "A. vulgaris" flowers experienced weakness of the limbs, cyanosis, drowsiness, and miosis; all symptoms abated after three hours. Mature seeds and roots contain toxins that, if consumed, are perilous to human heart health. The presence of the antibacterial flavonoid compound isocytisoside has been observed in "A. vulgaris". Polyphenols, primarily flavonoids, are the main component of hydroethanolic extract from "A. oxysepala". These compounds function as antioxidants. A study of "A. oxysepala" extract found it has a good scavenging effect on DPPH, superoxide anion, and hydroxyl radicals, but a poor scavenging capacity towards hydrogen peroxide. For all these, ascorbic acid has a superior scavenging effect to the extract. In flowering plants, the presence of phenylpropanoids can serve as protection from ultraviolet (UV) light and as a signaling mechanism towards pollinators. A study that examined "A. formosa" flowers determined that the petals and sepals had uniform levels of UV-resistant phenylpropanoids. Ecology. Despite its toxicity and in the absence of incentives, some animals consume the fruit and leaves of columbines. In the case of the endangered "A. paui", one study found that 30% of all fruit was lost to predation by the Southeastern Spanish ibex. Consumption by mammals is not considered a component of the "Aquilegia" reproductive cycle. In the Northeastern United States and Eastern Canada, "A. canadensis" serves as the host plant for the butterfly "Erynnis lucilius" (columbine duskywing). 
In two periods, the first from April to June and the second from July to September, the butterflies lay their eggs on the underside of the columbine leaves. The later brood overwinters as caterpillars in the plant litter around the columbine. In the Western United States, "Bombus occidentalis" (western bumblebee) has been observed nectar robbing from "A. coerulea" by opening or using holes cut in the spurs. Also in North America, three species of "Phytomyza" leaf miners lay their eggs on "Aquilegia": "P. aquilegivora" in the Midwest, "P. aquilegiana" in the east, and "P. columbinae" in the west. These flies are collectively known as the columbine leaf miners; white trails or splotches on leaves indicate where their larvae have consumed the tissue between the leaves' surfaces. The larvae will cut through the leaves, pupating in small puparia on the leaves' undersides. Adults pierce the leaves with their ovipositors to access liquids in the plants, leaving marks. Another "Phytomyza" columbine leaf miner, "P. ancholiae", is native to France. Originally from Europe, "Pristiphora rufipes" (columbine sawfly) is now also found in Canada and the United States. After developing from eggs laid on columbine leaves in late spring, the green larvae will eat the leaves from the outside in during their active period from April to June. In cases where many larvae are on the same plant, only the stem and flowers may go uneaten. The larvae mature within a few weeks, after which they drop from the plants and pupate in cocoons. Several fungi attack columbine foliage, including "Ascochyta aquilegiae", "Cercospora aquilegiae", and "Septoria aquilegiae". The fungus-like oomycete species "Peronospora aquilegiicola", a type of downy mildew, originated in East Asian "Aquilegia" and "Semiaquilegia" populations. It was first reported on columbines in the United Kingdom in 2013, resulting in discussion about quarantining measures to prevent its spread to Continental Europe. Pollination. Following the evolution of the genus, "Aquilegia" developed diverse floral features including varied nectar spur morphology, orientation, and coloration to attract different pollinators, contributing to speciation. The suite of floral traits that develops to attract a particular set of pollinators is collectively referred to as a pollination syndrome. "Aquilegia" flowers are traditionally divided into three pollination syndromes, for bumblebees, hummingbirds, or hawkmoths, each of which is attracted by the plants' nectar. In cases where pollinators are scarce, columbines may adopt autogamy as a primary fertilization method, such as in "A. paui". Eurasian columbines are primarily pollinated by flies, bees, and bumblebees. North American columbines are generally pollinated by bees, bumblebees, hawkmoths, and hummingbirds. Pollination by hummingbirds is more typical of red-flowered North American "Aquilegia", while pale-flowered columbines may have developed to increase their visibility to hawkmoths in twilight. Nectar spur length on particular columbines is often correlated with their associated pollinators. While nectar spur length in Eurasia varies little, there is substantial variation in North American spur length. Hawkmoths often possess long tongues, permitting them to reach deeper into nectar spurs. The elongated nectar spurs on some columbines prevent hawkmoths from removing nectar from the spurs without also making contact with the reproductive organs of the flower. 
While hawkmoths are present in Eurasia, there are no Eurasian columbines with the hawkmoth pollination syndrome, which includes longer spurs. In North America, the presence of hummingbirds, which are absent from Eurasia and possess tongue lengths generally intermediate between those of other pollinators and hawkmoths, may have functioned as a stepping stone that allowed North American "Aquilegia" to evolve the hawkmoth pollination syndrome. While a given population of "Aquilegia" may settle a particular habitat and develop pollination syndromes for certain pollinators, this does not necessarily translate into ecological speciation with genetic barriers between species. The likelihood of such speciation increases when floral mutations and pollinator behavioral changes coincide with isolated, small populations, as in the case of "A. micrantha" var. "mancosana". Taxonomy. Within Linnaean taxonomy, "Aquilegia" was first described as a genus in Carl Linnaeus's 1753 "Species Plantarum". The genus is typically assigned to the family Ranunculaceae, though a minority of botanists have considered it a member of the family Helleboraceae. The latter placement, first made by the French botanist Jean-Louis-Auguste Loiseleur-Deslongchamps in 1819, was premised on Helleboraceae fruit almost universally occurring as follicles. Another historic assignment, made by a Swedish botanist in 1870, placed "Aquilegia" as the sole member of the family Aquilegiaceae. Columbines are most commonly assigned to the tribe Isopyreae, though they are sometimes placed within Aquilegieae. The placement of the tribe containing "Aquilegia" has been uncertain, with alternating assignments to two subfamilies: Thalictroideae and Isopyroideae. Regardless of the placement, "Aquilegia" forms a basal, paraphyletic group with the genera "Isopyrum" and "Thalictrum" (meadow-rues), a group characterized by plesiomorphy (characteristics shared between clades from their last common ancestor) with Berberidaceae. When placed within the monophyletic Thalictroideae, "Aquilegia" is the second largest genus in the subfamily in terms of taxa (described species and subspecies), behind "Thalictrum". Columbines are nested in one of the three major clades in the subfamily, a clade they share with "Semiaquilegia" and "Urophysa". "Semiaquilegia" and "Aquilegia" are sister genera. The broadly accepted circumscription of "Aquilegia" was established by the American botanist Philip A. Munz in his 1946 monograph "Aquilegia: The Cultivated and Wild Columbines". The only element of Munz's circumscription which has been substantially contended is his inclusion of the spurless Asian species "A. ecalcarata", which is sometimes instead segregated into the closely related genus of spurless-flowered "Semiaquilegia"; "Semiaquilegia ecalcarata" remains the species's common name in cultivation. Another spurless columbine, "A. micrantha" var. "mancosana", was also once reassigned to "Semiaquilegia". Reassignments to "Isopyrum" and "Paraquilegia", such as "P. anemonoides" in 1920, have been more permanent. Evolution. There are no good fossils of columbines and other Thalictroideae that indicate how they evolved and radiated. Genetic evidence suggests that the last common ancestor among Thalictroideae lived in East Asia approximately 36 million years ago, during the late Eocene. A 2018 study of genetic evidence indicated that "Aquilegia" first appeared during the Upper Miocene approximately 6.9 million years ago. 
The genus split into two clades 4.8 million years ago, with one clade populating North America and the other radiating across Eurasia. A 2024 study found that the divergence between "Urophysa", "Semiaquilegia", and "Aquilegia" instead occurred over a relatively short 1 million-year-long period approximately 8 to 9 million years ago. The genus is thought to have originated in the mountainous portions of south-central Siberia. Studies of "Aquilegia" genetics indicated that North American "Aquilegia" species shared their last common ancestor with species from the Asian Far East between 3.84 and 2.99 million years ago. This analysis corresponded with the theory that "Aquilegia" reached North America via a land bridge over the Bering Strait. While there were several periods after this date range where the Beringian land bridge connected Asia and North America, these occurred when climatic conditions would have prevented "Aquilegia" migration through the region. Genetic information suggests that the diversification rates of columbines rapidly increased about 3 million years ago, with indications of two independent radiation events occurring around that time: one in North American columbine populations and the other in European populations. Despite the rapid evolution of substantial physical differences across species, genetic divergence remains minimal. This, combined with the presence of relatively few physiological barriers to hybridization, has resulted in columbines displaying exceptionally high degrees of interfertility. Among Asian and European columbines, differences in floral morphology and pollinators are smaller between species than between North American species. However, there are approximately the same number of "Aquilegia" species across the three continents. This suggests that pollinator specialization played a dominant role in North American columbine speciation, while habitat specialization was the primary driver of Asian and European columbine speciation. The nectar spurs present in "Aquilegia" are an unusual evolutionary trait, arising in the ancestor of all "Aquilegia" by approximately 7 million years ago. In order to determine the gene responsible for the trait, a 2020 paper compared spurred "Aquilegia" taxa against the spurless "A. ecalcarata". This research identified a gene named "POPOVICH" ("POP") as responsible for cell proliferation during the early stage of spur development. "POP", which encodes a C2H2 zinc-finger transcription factor, appeared at higher levels in the petals of the spurred "Aquilegia" studied than in "A. ecalcarata". Current species. According to different taxonomic authorities, the genus "Aquilegia" comprises anywhere from 70 to over 400 taxa. Some totals correspond more closely with Munz's 1946 total of 67, while the online databases Tropicos and the International Plant Names Index have accepted over 200 and over 500 names, respectively. , the Royal Botanic Gardens, Kew's Plants of the World Online accepts 130 species. The American botanist and gardener Robert Nold attributed the substantial total of named species, subspecies, and varieties to the 19th-century practice of assigning names to even minutely distinct specimens. However, Nold also held that overly broad species could increase the number of varietal names. 
The Italian botanist Enio Nardi stated that authors assessing "Aquilegia" as containing fewer than 100 species "either mask or underestimate their splitting into subspecies, many of which were originally described at the species level" and remain accepted as species in taxonomic indices. The type species of the genus is "A. vulgaris", a European columbine with high levels of physical variability. Most European "Aquilegia" are morphologically similar to "A. vulgaris", sometimes to the point where visually discerning them from "A. vulgaris" is difficult. "A. vulgaris" is sometimes considered to encompass Iberian and North African columbines that are not accepted as separate species for reasons that Nardi said were founded in "tradition, more cultural than scientific". Natural hybridization. A lack of genetic and physiological barriers permits columbine hybridization across even distantly related species with high degrees of morphological and ecological differences. In natural settings, hybrid columbines may occur wherever the natural ranges of multiple species come into contact. While artificial pollination has determined the extent of the genus's interfertility, breeding between plants of the same species is generally more common, even in natural and cultivated settings where multiple columbine species are present. A significant barrier to hybridization occurring naturally is the proclivity of pollinators to preferentially support infraspecific crossbreeding due to the pollinators' recognition of familiar flower typology. In North America, species with flowers adapted to hummingbird and hawkmoth pollination show far less natural hybridization with species that do not share these adaptations. Still, hybridization and subsequent introgression occur in North American columbines. Such hybridization across columbines with two different pollination syndromes can be driven by a third pollinator that does not show favoritism towards either pollination syndrome. In the instance of populations of hybrids between the yellow-flowered "A. flavescens" and red-flowered "A. formosa" in the northwestern United States, the resultant pink-flowering columbines were initially described as an "A. flavescens" variety and are now accepted as "Aquilegia" × "miniana". In China, clades distinguishing eastern and western "A. ecalcarata" populations indicate gene flow from different species. A study using genetic modeling indicated that the spurless "A. ecalcarata" may have developed from two separate mutations from discrete eastern and western populations of the spurred "A. kubanica", an instance of parallel evolution. Further hybridization between "A. ecalcarata" and spurred columbines that share its range is limited by each species's selection for particular pollinators. However, a short-spurred "A. rockii" phenotype has developed from hybridization with western "A. ecalcarata". Distribution. "Aquilegia" species have natural ranges which span the Northern Hemisphere in Eurasia and North America. These ranges encompass the Circumboreal Region, the geographically largest floristic region in the world. The southern limits of the natural "Aquilegia" ranges are found in northern Africa and northern Mexico, with the only native African columbine being "A. ballii" of the Atlas Mountains. "A. vulgaris", a European columbine which possibly originated in the Balkans, has spread through both natural radiation and human assistance to become the most widely distributed "Aquilegia" species. 
Its range has expanded to include introduced populations that have sometimes become naturalized in Africa, Macaronesia, the Americas, and Oceania. The species is also present in Asia, with populations in the Russian Far East and Uzbekistan. These introduced "A. vulgaris" populations typically originated from ornamental cultivation. Some columbines are narrowly endemic, with highly restricted ranges. "A. paui" has only a single population, with four subpopulations within a few kilometers of each other in the mountains of Ports de Tortosa-Beseit, Catalonia. "A. hinckleyana" populates only a single location: the basin of Capote Falls, a waterfall in Texas. , the entire population of "A. nuragica", estimated at 10 to 15 individuals, occupied an area of approximately on Sardinia. Conservation. Certain "Aquilegia" have been identified as having elevated risks of extinction, with some appearing on the IUCN Red List. Two Sardinian columbines, "A. barbaricina" and "A. nuragica", have conservation statuses assessed by the IUCN as critically endangered, and the same organization listed both species in its Top 50 Mediterranean Island Plants conservation campaign. Some columbines, including both rare and common taxa, are the subject of governmental regulations. Human activity poses a significant threat to columbine population health and can drive extinctions. Beyond the desirability of the flowers for display, uncommon or rare "Aquilegia" face the risk of destruction by botanists and others seeking to add them to their herbariums or private collections. Cultivation. In Europe, cultivation of columbines may have begun over 1700 years ago. Archaeobotanical evidence suggests that "A. vulgaris" was cultivated for ornamental purposes in 3rd-century AD Roman Britain. The discoveries of singular "A. vulgaris" seeds in burnt waste pits at Alcester and Leicester have been interpreted as evidence of their planting in gardens. Finds of columbines at a late Saxon site near Winchester Cathedral and three later medieval German sites have also been interpreted as evidence of the plant's use in gardening. In 12th-century Italy, people may have supported "A. vulgaris" or "A. atrata" populations near religious structures, possibly due to the contemporary treatment of columbines as Christian symbols. Lifespans for cultivated columbines are generally short for perennials, with a plant's peak typically occurring in its second year. Two- to three-year-long lives are typical in cultivated "A. coerulea" and "A. glandulosa", with "A. vulgaris" exhibiting a biennial-like lifespan. Conversely, "A. chrysantha" and "A. desertorum" are particularly long-lived. In gardens, columbines will generally live three to four years. This lifespan can be extended by deadheading, where dead flowers are removed prior to the plant expending the energy needed to produce fruit. In cultivation, the seasonal cycle that releases columbine seeds from dormancy can be replicated via stratification, in which seeds are exposed to two to four weeks of cool temperatures prior to sowing. Cultivated "Aquilegia" typically require well-draining soil. Improperly drained soil can result in the development of root rot, caused by either bacteria or fungi. At the end of the growth season, columbines can be protected from frost heaving by removing their dead foliage to near the soil level and mulching once the ground is frozen. 
Vernalization, a process by which juvenile plants are exposed to a weeks-long period of cold that mimics seasonal weather, can accelerate the rate at which columbines reach flowering. If permitted, cultivated columbines drop numerous seeds around themselves, resulting in a rapid proliferation of seedlings. These seedlings can give the impression that short-lived plants are living longer. Due to their tendency towards hybridization and their inherent genetic diversity, particularly in the case of F1 hybrids and cultivars (cultivated varieties), the seeds of cultivated "Aquilegia" often do not produce plants true to type. Several animals are considered pests of cultivated columbines. Columbine leaf miners of the "Phytomyza" genus leave white patches or paths on leaves, but the damage is only cosmetic and does not generally require chemical pesticides. The moths "Papaipema lysimachiae" and "P. nebris" (stalk borer) both adversely affect columbines; scraping the ground around impacted plants can destroy the moths' eggs. The larval stage of "Erynnis lucilius" (columbine duskywing) is known as the "columbine skipper"; the larvae can chew leaves and bind them together with silk. Aphid infestation is another frequent issue, requiring rapid intervention to prevent significant destruction. Cultivars and cultivated hybrids. Columbine cultivars are popular among gardeners, particularly in the Northern Hemisphere. Artificial hybridization efforts have determined that the degree of interfertility of columbines is not identical across species. While North American columbines easily hybridize with each other and most Eurasian "Aquilegia", the Asian species "A. oxysepala" and "A. viridiflora" resist hybridization with North American columbines. The single-flowering "A. vulgaris" cultivar 'Nivea' (also known as 'Munstead White') received the Royal Horticultural Society's Award of Garden Merit. Double-flowered columbines were developed from "A. vulgaris" and can be classified into three types. The Flore Pleno group, described in the English herbalist John Gerard's 1597 book "Herball", comprises plants whose flowers are elongated and whose petals are rounded. The Veraeneana group comes in several flower colors and possesses marbled green and gold foliage. The Stellata group, described in the English botanist John Parkinson's 1629 book "Paradisi in Sole Paradisus Terrestris", has flowers which are star-shaped and have pointed petals. The three-colored, double-flowered cultivar 'Nora Barlow', first discovered by the botanist and geneticist Nora Barlow, is sometimes classified as part of the Stellata group, but displays a greater quantity of particularly narrow sepals than other members of that group. Human uses. Medicinal and herbal. Practitioners of Asian traditional medicine, Indigenous North Americans, and medieval Europeans have considered columbines to be medicinal herbs. Modern scientific research has determined that columbines can possess antioxidant, antibacterial, and anti-cancer qualities. In China, "A. oxysepala" has been used as a dietary supplement and medicine for thousands of years. "A. oxysepala" has been used there to treat diseases in women such as irregular menstruation and intermenstrual bleeding. The extract is a known antioxidant, and its medicinal use is possibly attributable to its effective scavenging of superoxide anion radicals, though in this respect it is inferior to the common dietary supplement ascorbic acid. Research has also determined "A. oxysepala" to possess antibacterial qualities. 
"A. sibirica" has been a significant part of Asian traditional medicine, including traditional Mongolian medicine, and the plant has been used to treat diseases in women, asthma, rheumatism, and cardiovascular diseases. It was also known to inhibit "Staphylococcus aureus", one of the bacteria responsible for staphylococcal infections. "A. sibirica" also possesses antifungal qualities. Extracts showed the presence of chlorogenic acid and caffeic acid. Extractions performed with heat and methanol extracted more of the medically relevant compounds than those performed at room temperature or with other solvents. Some Indigenous North American peoples used the roots of columbines to treat ulcers. North American peoples have used "A. canadensis" and "A. chaplinei" as an aphrodisiac. Crushed "A. canadensis" seeds were used as a perfume, and the plant was thought to be capable of detecting bewitchment. The Goshute people reportedly chewed "A. coerulea" seeds or utilized the plant's root for medicinal or therapeutic purposes. Other uses. Prior to deaths due to overdoses, small quantities of flowers from several columbines species were considered safe for human consumption and were regularly eaten as colorful garnishes and parts of salads. Several Indigenous North American peoples have been described as eating "A. formosa": the Miwok may have boiled and eaten them with early spring greens, while Hanaksiala and Chehalis children may have sucked nectar from the flowers. Columbine flowers are described as sweet, a flavor attributed to their nectar. Verne Grant repeatedly utilized "Aquilegia" in research published between the 1950s and the 1990s to explain the role that hybridization, polyploidy, and other processes played in how plant evolution and speciation occur. Among Grant's works that utilized "Aquilegia" to illustrate evolutionary patterns and processes was his influential 1971 book "Plant Speciation". The five species groups that Grant proposed in 1952 remains a foundational element for a phylogenetic understanding of columbines. In 21st-century scientific research of plant development, ecology, and evolution, "Aquilegia" has been considered a model system. Utilizing the genome sequence of "A. coerulea", a study examined polyploidy during the evolution of eudicots, a clade in which columbines are considered a basal member. This research determined that columbines and all eudicots experienced a shared tetraploidy, but that only core members of the eudicots clade (which excludes columbines) experienced a shared hexaploidy. In culture. European columbines have been assigned several meanings since the ancient period. Within art, "A. vulgaris" has been a symbol of both moral and immoral behaviors, as well as an ornamental motif. In ancient Greece and ancient Rome, the spurs of columbines were interpreted as phallic and the plants were associated with the fertility goddesses Aphrodite and Venus. For several centuries, columbines were viewed as symbols of cuckoldry. In English literature, columbines have been mentioned with negative connotations. In William Shakespeare's Elizabethan drama "Hamlet", the character Ophelia presents King Claudius with flowers that include columbines, where the species is symbolic of deception and serves as an omen of death. Medieval European artists associated the columbines with Christian sacredness and sublimity, with Flemish painters of the 15th century frequently depicting them in prominent locations within their Christian artworks. 
In "The Garden of Earthly Delights" (1503–1504) by Hieronymus Bosch, "A. vulgaris" serves as a symbol for bodily pleasures. "Portrait of a Princess" (1435–1449) by Pisanello depicts multiple "A. atrata" at different angles as part of the floral ornamentation that makes that painting characteristic of the international Gothic style. Columbines have several meanings in the language of flowers, a manner of communicating using floral displays. The 1867 English book "The Illustrated Language of Flowers" by a "Mrs. L. Burke", columbines are generally described as communicating "folly". The same book identifies purple columbines with "resolve to win" and red columbines with "anxious and trembling". Columbines, due to their resemblance to doves, have been associated with the Holy Spirit in Christianity since at least the 15th century. "A. coerulea" is the state flower of Colorado. The Colorado General Assembly passed legislation in 1925 making it illegal to uproot "A. coerulea" on public lands. The law also limits on how many buds, blossoms, and stems may be picked from the species by a person on public lands. It was used in the heraldry of the former city of Scarborough in the Canadian province of Ontario. The asteroid 1063 Aquilegia was named for the genus by the German astronomer Karl Reinmuth. He submitted a list of 66 newly named asteroids in the early 1930s, including a sequence of 28 asteroids that were all named after plants, in particular flowering plants.
6829
461300
https://en.wikipedia.org/wiki?curid=6829
Cache (computing)
In computing, a cache ( ) is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs. To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested. Motivation. In memory design, there is an inherent trade-off between capacity and speed because larger capacity implies larger size and thus greater physical distances for signals to travel causing propagation delays. There is also a tradeoff between high-performance technologies such as SRAM and cheaper, easily mass-produced commodities such as DRAM, flash, or hard disks. The buffering provided by a cache benefits one or both of latency and throughput (bandwidth). A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicit prefetching can be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether. The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus. Operation. Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs), solid-state drives (SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, while web browsers and web servers commonly rely on software caching. A cache is made up of a pool of entries. Each entry has associated "data", which is a copy of the same data in some "backing store". Each entry also has a "tag", which specifies the identity of the data in the backing store of which the entry is a copy. When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache. The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. 
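The lookup just described can be sketched in a few lines of Python. This is an illustrative sketch only, not the API of any particular caching library; the class and function names are invented for the example, and the backing-store fetch is a stand-in for a slower source such as a disk or the network.

```python
# Minimal sketch of a cache lookup: tags map to cached data.
class SimpleCache:
    def __init__(self, fetch_from_backing_store):
        self.entries = {}              # tag -> data
        self.fetch = fetch_from_backing_store
        self.hits = 0
        self.misses = 0

    def get(self, tag):
        if tag in self.entries:        # cache hit: serve from the faster store
            self.hits += 1
            return self.entries[tag]
        self.misses += 1               # cache miss: go to the backing store
        data = self.fetch(tag)
        self.entries[tag] = data       # keep a copy for the next request
        return data

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Example: the tag is a URL and the data is the page content.
cache = SimpleCache(lambda url: f"<html>contents of {url}</html>")
cache.get("https://example.org/")      # miss: fetched and stored
cache.get("https://example.org/")      # hit: served from the cache
print(cache.hit_ratio())               # 0.5
```

The miss branch above corresponds to the more expensive access to the backing store discussed next.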
A cache miss requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access. During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries. Write policies. Cache writes must eventually be propagated to the backing store. The timing for this is governed by the "write policy". The two primary write policies are write-through, in which writes are performed synchronously both to the cache and to the backing store, and write-back, in which writing is initially done only to the cache and the write to the backing store is postponed until the modified content is about to be evicted. A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as "dirty" for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a "lazy write". For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by two "write-miss policies": under write allocate, the data at the missed-write location is loaded into the cache, followed by a write-hit operation; under no-write allocate, the data is not loaded into the cache and is written directly to the backing store. While both write policies can implement either write-miss policy, they are typically paired as follows: a write-back cache usually uses write allocate, in the hope that subsequent writes or reads to the same location will be served by the cache, while a write-through cache usually uses no-write allocate, since subsequent writes to that location would still have to go directly to the backing store. Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or "stale". Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence. Prefetch. On a cache read miss, caches with a "demand paging policy" read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache. Caches with a prefetch input queue or more general "anticipatory paging policy" go further: they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM. A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching. Examples of hardware caches. CPU cache. Small memories on or close to the CPU can operate faster than the much larger main memory. 
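As a rough software analogy for the cascaded cache levels described below (real CPU caches are implemented in hardware and operate on fixed-size cache lines; the capacities, names, and eviction rule here are purely illustrative):

```python
# Illustrative lookup through two cache levels before the backing store:
# check the small fast level first, then the larger slower one, then memory.
def make_level(capacity):
    return {"data": {}, "capacity": capacity}

def level_get(level, key):
    return level["data"].get(key)

def level_put(level, key, value):
    if len(level["data"]) >= level["capacity"]:
        level["data"].pop(next(iter(level["data"])))   # naive eviction of the oldest entry
    level["data"][key] = value

l1, l2 = make_level(4), make_level(16)
main_memory = {addr: addr * 2 for addr in range(1000)}  # stand-in backing store

def read(addr):
    value = level_get(l1, addr)
    if value is not None:
        return value                  # L1 hit
    value = level_get(l2, addr)
    if value is None:
        value = main_memory[addr]     # miss in both levels: go to memory
        level_put(l2, addr, value)
    level_put(l1, addr, value)        # promote into the faster level
    return value

print(read(42), read(42))             # the second read is an L1 hit
```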
Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions). Some examples of caches with a specific function are the D-cache, I-cache and the translation lookaside buffer for the memory management unit (MMU). GPU cache. Earlier graphics processing units (GPUs) often had limited read-only texture caches and used swizzling to improve 2D locality of reference. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel. As GPUs advanced, supporting general-purpose computing on graphics processing units and compute kernels, they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handle synchronization primitives between threads and atomic operations, and interface with a CPU-style MMU. DSPs. Digital signal processors have similarly generalized over the years. Earlier designs used scratchpad memory fed by direct memory access, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache). Translation lookaside buffer. A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB). In-network cache. Information-centric networking. Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions. Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed. Policies. Time aware least recently used. The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN, content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage. 
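Before the algorithmic details that follow, the idea of entries carrying a locally determined usability time can be illustrated with a generic expiry-based cache. This is a simplified sketch, not the full TLRU scheme; the names and the simple capping rule are assumptions made for the example.

```python
import time

# Each entry carries an expiry time, analogous to a locally computed TTU.
# Expired entries are treated as absent and become candidates for replacement.
class ExpiringCache:
    def __init__(self, default_ttu_seconds):
        self.default_ttu = default_ttu_seconds
        self.entries = {}   # key -> (value, expires_at)

    def put(self, key, value, publisher_ttu=None):
        # A real TLRU node would derive the local TTU from the publisher's
        # value with a locally defined function; here we simply cap it.
        ttu = min(publisher_ttu, self.default_ttu) if publisher_ttu else self.default_ttu
        self.entries[key] = (value, time.monotonic() + ttu)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:   # stale: drop it and report a miss
            del self.entries[key]
            return None
        return value

cache = ExpiringCache(default_ttu_seconds=30)
cache.put("/videos/abc", b"...", publisher_ttu=10)
print(cache.get("/videos/abc") is not None)   # True while the entry is fresh
```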
In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally defined function. Once the local TTU value is calculated, the replacement of content is performed on a subset of the total content stored in the cache node. TLRU ensures that less popular and short-lived content is replaced with incoming content. Least frequent recently used. The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In the above procedure, the LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition. Weather forecast. In 2011, the use of smartphones with weather forecasting options was placing an excessive load on AccuWeather servers; two requests from the same area would generate separate requests. An optimization by edge servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from a nearby query could be reused. The number of lookups to the server per day dropped by half. Software caches. Disk cache. While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory is managed by the operating system kernel. While the disk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to as "disk cache", its main functions are write sequencing and read prefetching. High-end disk controllers often have their own on-board cache for the hard disk drive's data blocks. Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives. Web cache. Web browsers and web proxy servers, either locally or at the Internet service provider (ISP), employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web. Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. 
Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli. Memoization. A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching. Content delivery network. A content delivery network (CDN) is a network of distributed servers that deliver pages and other web content to a user, based on the geographic locations of the user, the origin of the web page and the content delivery server. CDNs were introduced in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache. Cloud storage gateway. A cloud storage gateway is a hybrid cloud storage device that connects a local network to one or more cloud storage services, typically object storage services such as Amazon S3. It provides a cache for frequently accessed data, giving high-speed local access to that data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages. Other caches. The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a DNS resolver library. Write-through operation is common when operating over unreliable networks, because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side caches for distributed file systems (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable. Web search engines also frequently make web pages they have indexed available from their cache. This can prove useful when web pages from a web server are temporarily or permanently inaccessible. Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data. A distributed cache uses networked hosts to provide scalability, reliability and performance to the application. The hosts can be co-located or spread over different geographical regions. Buffer vs. cache. The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering. Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system. 
With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, reduces the number of transfers of otherwise novel data between communicating processes by amortizing the overhead of several small transfers over fewer, larger transfers; provides an intermediary for communicating processes that are incapable of direct transfers between each other; or ensures a minimum data size or representation required by at least one of the processes involved in a transfer. With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching. A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency, as opposed to caching, where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once. A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. A cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.
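Tying together the replacement and write policies described earlier in this article, the following is a compact software sketch of a write-back cache with LRU eviction and write allocation. It is illustrative only (hardware caches are not implemented this way), and the class and variable names are invented for the example.

```python
from collections import OrderedDict

# Write-back cache with LRU eviction: writes go only to the cache and are
# marked dirty; dirty data is flushed to the backing store lazily, when the
# entry is evicted (or when flush() is called explicitly).
class WriteBackLRUCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store          # e.g. a dict standing in for slow storage
        self.entries = OrderedDict()        # key -> (value, dirty)

    def _evict_if_full(self):
        if len(self.entries) > self.capacity:
            key, (value, dirty) = self.entries.popitem(last=False)  # least recently used
            if dirty:
                self.store[key] = value     # lazy write of dirty data

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)   # mark as most recently used
            return self.entries[key][0]
        value = self.store[key]             # miss: fetch from the backing store
        self.entries[key] = (value, False)
        self._evict_if_full()
        return value

    def write(self, key, value):            # write allocate: cache the new value
        self.entries[key] = (value, True)   # dirty until written back
        self.entries.move_to_end(key)
        self._evict_if_full()

    def flush(self):
        for key, (value, dirty) in self.entries.items():
            if dirty:
                self.store[key] = value
        self.entries = OrderedDict((k, (v, False)) for k, (v, _) in self.entries.items())

backing = {"a": 1, "b": 2}
cache = WriteBackLRUCache(capacity=2, backing_store=backing)
cache.write("a", 10)        # only the cache sees this for now
print(backing["a"])         # still 1: the write has not been propagated yet
cache.flush()
print(backing["a"])         # 10 after the write-back
```

Note how a write only reaches the backing store on eviction or an explicit flush, which is the "lazy write" behavior described under write policies.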
6830
53396
https://en.wikipedia.org/wiki?curid=6830
Columbus, Indiana
Columbus () is a city in and the county seat of Bartholomew County, Indiana, United States. The population was 50,474 at the 2020 census. The city is known for its architectural significance, having commissioned noted works of modern architecture and public art since the mid-20th century; the annual program Exhibit Columbus celebrates this legacy. Located about south of Indianapolis, on the east fork of the White River, it is the state's 20th-largest city. It is the principal city of the Columbus, Indiana metropolitan statistical area, which encompasses all of Bartholomew County. Columbus is the birthplace of former Indiana Governor and former Vice President of the United States, Mike Pence. Columbus is the headquarters of the engine company Cummins. In 2004 the city was named as one of "The Ten Most Playful Towns" by "Nick Jr. Family Magazine". In the July 2005 edition of "GQ" magazine, Columbus was named as one of the "62 Reasons to Love Your Country". Columbus won the national contest "America in Bloom" in 2006, and in late 2008, "National Geographic Traveler" ranked Columbus 11th on its historic destinations list, describing the city as "authentic, unique, and unspoiled." History. The land developed as Columbus was bought by General John Tipton and Luke Bonesteel in 1820. Tipton built a log cabin on Mount Tipton, a small hill overlooking the White River and the surrounding flat, heavily forested and swampy valley, which held wetlands of the river. The town was first known as Tiptona, named in honor of Tipton. The town's name was changed to Columbus on March 20, 1821. Many people believe Tipton was upset by the name change, but no evidence exists to prove this. Nonetheless, he decided to leave the newly founded town and did not return. Tipton was later appointed as the highway commissioner for the State of Indiana and was assigned to build a highway from Indianapolis, Indiana to Louisville, Kentucky. When the road approached Columbus, Tipton constructed the first bypass road ever built; it detoured south around the west side of Columbus en route to Seymour. Joseph McKinney was the first to plot the town of Columbus, but no date was recorded. Local history books for years said that the land on which Columbus sits was donated by Tipton. But in 2003, Historic Columbus Indiana acquired a deed showing that Tipton had sold the land. A ferry was established below the confluence of the Flatrock and Driftwood rivers, which form the White River. A village of three or four log cabins developed around the ferry landing, and a store was added in 1821. Later that year, Bartholomew County was organized by an act of the State Legislature and named to honor the famous Hoosier militiaman, General Joseph Bartholomew. Columbus was incorporated as a town on June 28, 1864, and was incorporated as a city in 1921. The first railroad in Indiana was constructed to Columbus from Madison, Indiana in 1844. This eventually became the Madison branch of the Pennsylvania Railroad. The railroad fostered the growth of the community into one of the largest in Indiana, and three more railroads reached the city by 1850. The Crump Theatre in Columbus, built in 1889 by John Crump, is the oldest theater in Indiana. Today the building is included within the Columbus Historic District. Before it closed permanently in 2010, it was an all-ages venue with occasional musical performances. The Cummins Bookstore began operations in the city in 1892. Until late 2007, when it closed, it was the oldest continually operated bookstore in Indiana. 
The Irwin Union Bank building was built in 1954. It was designated as a National Historic Landmark by the National Park Service in 2001 in recognition of its unique architecture. The building consists of a one-story bank structure adjacent to a three-story office annex. A portion of the office annex was built along with the banking hall in 1954. The remaining larger portion, designed by Kevin Roche John Dinkeloo and Associates, was built in 1973. Eero Saarinen designed the bank building with its glazed hall to be set off against the blank background of its three-story brick annex. Two steel and glass vestibule connectors lead from the north side of this structure to the annex. The building was designed to distance the Irwin Union Bank from traditional banking architecture, which mostly echoed imposing, neoclassical style buildings of brick or stone. Tellers were behind iron bars and removed from their customers. Saarinen worked to develop a building that would welcome customers rather than intimidate them. Economy. Columbus has been home to many manufacturing companies, including Noblitt-Sparks Industries, which built radios under the Arvin brand in the 1930s, and Arvin Industries, now Meritor. After merging with Meritor Automotive on July 10, 2000, the headquarters of the newly created ArvinMeritor Industries was established in Troy, Michigan, the home of parent company Rockwell International. It was announced in February 2011 that the company name would revert to Meritor, Inc. Cummins is by far the region's largest employer, and the Infotech Park in Columbus accounts for a sizable number of research jobs in the city itself. Just south of Columbus are the North American headquarters of Toyota Material Handling, the world's largest material handling (forklift) manufacturer. Other notable industries include architecture, a discipline for which Columbus is famous worldwide. The late Joseph Irwin Miller (then president and chairman of Cummins) launched the Cummins Foundation, a charitable program that helps subsidize a large number of architectural projects throughout the city by up-and-coming engineers and architects. Early in the 20th century, Columbus also was home to a number of pioneering car manufacturers, including Reeves, which produced the unusual four-axle Octoauto and the twin rear-axle Sextoauto, both around 1911. Geography. The Driftwood and Flatrock Rivers converge at Columbus to form the East Fork of the White River. According to the 2010 census, Columbus has a total area of , of which (or 98.62%) is land and (or 1.38%) is water. Demographics. 2010 census. As of the census of 2010, there were 44,061 people, 17,787 households, and 11,506 families residing in the city. The population density was . There were 19,700 housing units at an average density of . The racial makeup of the city was 86.9% White, 2.7% African American, 0.2% Native American, 5.6% Asian, 0.1% Pacific Islander, 2.5% from other races, and 2.0% from two or more races. Hispanic or Latino of any race were 5.8% of the population. There were 17,787 households, of which 33.5% had children under the age of 18 living with them, 48.5% were married couples living together, 11.7% had a female householder with no husband present, 4.5% had a male householder with no wife present, and 35.3% were non-families. 29.7% of all households were made up of individuals, and 11.5% had someone living alone who was 65 years of age or older. The average household size was 2.43 and the average family size was 3.00. 
The median age in the city was 37.1 years. 25.2% of residents were under the age of 18; 8.1% were between the ages of 18 and 24; 27.3% were from 25 to 44; 24.9% were from 45 to 64; and 14.4% were 65 years of age or older. The gender makeup of the city was 48.4% male and 51.6% female. 2000 census. As of the census of 2000, there were 39,059 people, 15,985 households, and 10,566 families residing in the city. The population density was . There were 17,162 housing units at an average density of . The racial makeup of the city was 91.32% White, 2.71% Black or African American, 0.13% Native American, 3.23% Asian, 0.05% Pacific Islander, 1.39% from other races, and 1.19% from two or more races. 2.81% of the population were Hispanic or Latino of any race. There were 15,985 households, out of which 31.8% had children under the age of 18 living with them, 51.9% were married couples living together, 11.0% had a female householder with no husband present, and 33.9% were non-families. 29.1% of all households were composed of individuals, and 10.7% had someone living alone who was 65 years of age or older. The average household size was 2.39, and the average family size was 2.94. In the city, the population was spread out, with 25.7% under the age of 18, 8.0% from 18 to 24 years, 29.5% from 25 to 44 years, 23.0% from 45 to 64 years, and 13.7% over the age of 65. The median age was 36 years. There were 92.8 males for every 100 females and 89.6 males for every 100 females over age 18. The median income for a household in the city was $41,723, and the median income for a family was $52,296. Males had a median income of $40,367 versus $24,446 for females, and the per capita income was $22,055. About 6.5% of families and 8.1% of the population were below the poverty line, including 9.7% of those under age 18 and 8.8% of those age 65 or over. Arts and culture. Columbus is a city known for its modern architecture and public art. J. Irwin Miller, 2nd CEO and a nephew of a co-founder of Cummins, the Columbus-headquartered diesel engine manufacturer, instituted a program in which the Cummins Foundation paid the architects' fees, provided the client selected a firm from a list compiled by the foundation. The plan was initiated with public schools and was so successful that the foundation decided to offer such design support to other non-profit and civic organizations. The high number of notable public buildings and public art in the Columbus area, designed by such individuals as Eero Saarinen, I. M. Pei, Robert Venturi, César Pelli, and Richard Meier, led to Columbus earning the nickname "Athens on the Prairie." Seven buildings, constructed between 1942 and 1965, are National Historic Landmarks, and approximately 60 other buildings sustain the Bartholomew County seat's reputation as a showcase of modern architecture. National Public Radio once devoted an article to the town's architecture. In 2015, Landmark Columbus was created as a program of Heritage Fund - The Community Foundation of Bartholomew County. In addition to the Columbus Historic District and Irwin Union Bank, the city has numerous buildings listed on the National Register of Historic Places, among them seven National Historic Landmarks of modernist architecture; listed buildings include the Bartholomew County Courthouse, Columbus City Hall, First Baptist Church, First Christian Church, Haw Creek Leather Company, Mabel McDowell Elementary School, McEwen-Samuels-Marr House, McKinley School, Miller House, North Christian Church, and The Republic Newspaper Office. 
The city is the basis for the 2017 film "Columbus" by independent filmmaker Kogonada. The film was shot on location in Columbus over 18 days in the summer of 2016. Exhibit Columbus. In May 2016, Landmark Columbus launched Exhibit Columbus as a way to continue the ambitious traditions of the past into the future. Exhibit Columbus features annual programming that alternates between symposium and exhibition years. Sports. Columbus High School was home to footwear pioneer Chuck Taylor, who played basketball in Columbus before setting out to promote his now-famous shoes and the sport of basketball; he was later inducted into the Naismith Memorial Basketball Hall of Fame. Two local high schools compete within the state in various sports. Columbus North and Columbus East both have competitive athletics and have many notable athletes who go on to compete in college and beyond. Columbus North High School houses one of the largest high school gyms in the United States. The Indiana Diesels of the Premier Basketball League play their home games at the gymnasium at Ceraland Park, with plans to move to a proposed downtown sports complex in the near future. Similarly, the Indiana Sentinels of the Federal Prospects Hockey League play their home games at Hamilton Community Center & Ice Arena with plans to move to a newer, larger arena by 2029. Parks and recreation. Columbus boasts over of parks and green space and over 20 miles of People Trails. These amenities, in addition to several athletic and community facilities, including Donner Aquatic Center, Lincoln Park Softball Complex, Hamilton Center Ice Arena, Clifty Park, Foundation for Youth/Columbus Gymnastics Center and The Commons, are managed and maintained by the Columbus Parks and Recreation Department. Transportation. Transit. ColumBUS provides bus service in the city with five routes operating Monday through Saturday. Roads and highways. The north–south U.S. Route 31 has been diverted to the northeastern part of the city. Interstate 65 bypasses Columbus to the west. Indiana Route 46 runs east–west through the southern section of the city. Railroads. Freight rail service is provided by the Louisville and Indiana Railroad (LIRC). The LIRC line runs in a north–south orientation along the western edge of Columbus. The Pennsylvania Railroad's "Kentuckyian" (Chicago-Louisville) made stops in the city until 1968. The PRR and its successor, the Penn Central, ran the Florida-bound "South Wind" up to 1971. The city has been earmarked as a location for a new Amtrak station along the Chicago-Indianapolis-Louisville rail corridor. Airport. Columbus is served by the Columbus Municipal Airport (KBAK). It is located approximately north of Columbus. The airport handles approximately 40,500 operations per year, with roughly 87% general aviation, 4% air taxi, 8% military and less than 1% commercial service. The airport has two concrete runways: a 6,401-foot runway with approved ILS and GPS approaches (Runway 5-23) and a 5,001-foot crosswind runway, also with GPS approaches (Runway 14-32). The nearest commercial airport which currently has scheduled airline service is Indianapolis International Airport (IND), located approximately northwest of Columbus. Louisville Muhammad Ali International Airport and Cincinnati/Northern Kentucky International Airport are to the south and to the southeast, respectively. Notable people. This is a list of notable people who were born in, currently live in, or have lived in Columbus. Education. 
The Bartholomew Consolidated School Corporation (BCSC) is the local school district; its high schools include Columbus North and Columbus East. Columbus has a public library, a branch of the Bartholomew County Public Library. Higher education institutions include Indiana University Columbus (IU Columbus), an Ivy Tech campus, a Purdue Polytechnic campus, and an Indiana Wesleyan University education center.
6834
49629056
https://en.wikipedia.org/wiki?curid=6834
List of computer scientists
This is a list of computer scientists, people who do work in computer science, in particular researchers and authors. Some persons notable as programmers are included here because they work in research as well as program. A few of these people pre-date the invention of the digital computer; they are now regarded as computer scientists because their work can be seen as leading to the invention of the computer. Others are mathematicians whose work falls within what would now be called theoretical computer science, such as complexity theory and algorithmic information theory.
6839
5718152
https://en.wikipedia.org/wiki?curid=6839
Reaction kinetics in uniform supersonic flow
Reaction kinetics in uniform supersonic flow (, CRESU) is an experiment investigating chemical reactions taking place at very low temperatures. The technique involves the expansion of a gas or mixture of gases through a de Laval nozzle from a high-pressure reservoir into a vacuum chamber. As it expands, the nozzle collimates the gas into a uniform supersonic beam, which is essentially collision-free and has a temperature that, in the centre-of-mass frame, can be significantly below that of the reservoir gas. Each nozzle produces a characteristic temperature. This way, any temperature between room temperature and about 10 K can be achieved. Apparatus. There are relatively few CRESU apparatuses in existence for the simple reason that the gas throughput and pumping requirements are huge, which makes them expensive to run. Two of the leading centres have been the University of Rennes (France) and the University of Birmingham (UK). A more recent development has been a pulsed version of the CRESU, which requires far less gas and therefore smaller pumps. Kinetics. Most species have a negligible vapour pressure at such low temperatures, and this means that they quickly condense on the sides of the apparatus. Essentially, the CRESU technique provides a "wall-less flow tube", which allows the kinetics of gas-phase reactions to be investigated at much lower temperatures than otherwise possible. Chemical kinetics experiments can then be carried out in a pump–probe fashion, using a laser to initiate the reaction (for example, by preparing one of the reagents by photolysis of a precursor), followed by observation of that same species (for example, by laser-induced fluorescence) after a known time delay. The fluorescence signal is captured by a photomultiplier a known distance downstream of the de Laval nozzle. The time delay can be varied up to the maximum corresponding to the flow time over that known distance. By studying how quickly the reagent species disappears in the presence of differing concentrations of a (usually stable) co-reagent species, the reaction rate constant at the low temperature of the CRESU flow can be determined. Reactions studied by the CRESU technique typically have no significant activation energy barrier. In the case of neutral–neutral reactions (i.e., not involving any charged species, ions), these types of barrier-free reactions usually involve free radical species, such as molecular oxygen (O2), the cyanide radical (CN) or the hydroxyl radical (OH). The energetic driving force for these reactions is typically an attractive long-range intermolecular potential. CRESU experiments have been used to show deviations from Arrhenius kinetics at low temperatures: as the temperature is reduced, the rate constant actually increases. They can explain why chemistry is so prevalent in the interstellar medium, where many different polyatomic species have been detected (by radio astronomy).
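The extraction of the rate constant described above usually follows a standard pseudo-first-order treatment; a brief outline is given below, with symbols chosen for this sketch rather than taken from any specific study.

```latex
% Reagent A (prepared by photolysis and probed by laser-induced fluorescence)
% decays in a large excess of co-reagent B, so [B] is effectively constant
% during each decay.
\[
  [\mathrm{A}](t) = [\mathrm{A}]_0 \, e^{-k_{\mathrm{1st}} t},
  \qquad
  k_{\mathrm{1st}} = k_{2}\,[\mathrm{B}] + k_{0}
\]
% Fitting each fluorescence decay gives the pseudo-first-order constant
% k_1st; plotting k_1st against the co-reagent concentration [B] yields the
% second-order rate constant k_2 as the slope, with the intercept k_0
% collecting losses that do not depend on [B].
```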
6840
7957594
https://en.wikipedia.org/wiki?curid=6840
Cygwin
Cygwin is a free and open-source Unix-like environment and command-line interface (CLI) for Microsoft Windows. The project also provides a software repository containing open-source packages. Cygwin allows source code for Unix-like operating systems to be compiled and run on Windows, and provides integration of Windows-based applications and data with the Unix-like environment. The terminal emulator mintty is the default command-line interface provided to interact with the environment. The Cygwin installation directory layout mimics the root file system of Unix-like systems, with directories such as /bin, /etc, /home, /usr, and /var. Cygwin is released under the GNU Lesser General Public License version 3. It was originally developed by Cygnus Solutions, which was later acquired by Red Hat (now part of IBM), to port the GNU toolchain to Win32, including the GNU Compiler Collection (GCC). Rather than rewrite the tools to use the Win32 runtime environment, Cygwin implemented a POSIX-compatible environment in the form of a DLL. The brand motto is "Get that Linux feeling – on Windows", although Cygwin contains no Linux code. History. Cygwin began in 1995 as a project of Steve Chamberlain, a Cygnus engineer who observed that Windows NT and 95 used COFF as their object file format, and that GNU already included support for x86 and COFF, along with the C library newlib. He thought that it would be possible to retarget GCC and produce a cross compiler generating executables that could run on Windows. A prototype was later developed. Chamberlain then bootstrapped the compiler on a Windows system, which required emulating enough of Unix to let the GNU configure shell script run. Initially, Cygwin was called "Cygwin32". When Microsoft registered the trademark Win32, the "32" was dropped to simply become "Cygwin". In 1999, Cygnus offered Cygwin 1.0 as a commercial product; no subsequent commercial versions have been released, with development instead continuing through open-source releases. Geoffrey Noer was the project lead from 1996 to 1999. Christopher Faylor was lead from 1999 to 2004; after leaving Red Hat, he became co-lead with Corinna Vinschen. Corinna Vinschen has been the project lead from mid-2014 to date (as of September 2024). From June 23, 2016, the Cygwin library version 2.5.2 was licensed under the GNU Lesser General Public License (LGPL) version 3. Description. Cygwin is provided in two versions: the full 64-bit version and a stripped-down 32-bit version, whose final version was released in 2022. Cygwin consists of a library that implements the POSIX system call API in terms of Windows system calls to enable the running of a large number of application programs equivalent to those on Unix systems, and a GNU development toolchain (including GCC and GDB). Programmers have ported the X Window System, K Desktop Environment 3, GNOME, Apache, and TeX. Cygwin permits installing inetd, syslogd, sshd, Apache, and other daemons as standard Windows services. Cygwin programs have full access to the Windows API and other Windows libraries. Cygwin programs are installed by running Cygwin's "setup" program, which downloads them from repositories on the Internet. The Cygwin API library is licensed under the GNU Lesser General Public License version 3 (or later), with an exception to allow linking to any free and open-source software whose license conforms to the Open Source Definition. Cygwin consists of two parts: a dynamic-link library (cygwin1.dll), which provides the POSIX-compatible API on top of the Windows API, and a large collection of software tools and applications that provide a Unix-like look and feel. Cygwin supports POSIX symbolic links, representing them as plain-text files with the system attribute set.
Cygwin 1.5 represented them as Windows Explorer shortcuts, but this was changed for reasons of performance and POSIX correctness. Cygwin also recognises NTFS junction points and symbolic links and treats them as POSIX symbolic links, but it does not create them. The POSIX API for handling access control lists (ACLs) is supported. Technical details. A Cygwin-specific version of the Unix "mount" command allows mounting Windows paths as "filesystems" in the Unix file space. Initial mount points can be configured in /etc/fstab, which has a format very similar to that of Unix systems, except that Windows paths appear in place of devices. Filesystems can be mounted in binary mode (by default), or in text mode, which enables automatic conversion between LF and CRLF line endings (and which only affects programs that open files without explicitly specifying text or binary mode). Cygwin 1.7 introduced comprehensive support for POSIX locales, and the UTF-8 Unicode encoding became the default. The fork system call for duplicating a process is fully implemented, but the copy-on-write optimization strategy could not be used. Cygwin's default user interface is the bash shell running in the mintty terminal emulator. The DLL also implements pseudo terminal (pty) devices, and Cygwin ships with a number of terminal emulators that are based on them, including rxvt/urxvt and xterm. The version of GCC that comes with Cygwin has various extensions for creating Windows DLLs, such as specifying whether a program is a windowing or console-mode program. Support for compiling programs that do not require the POSIX compatibility layer provided by the Cygwin DLL used to be included in the default GCC, but it is now provided by cross-compilers contributed by the MinGW-w64 project. Software packages. Cygwin's base package selection is approximately 100 MB, containing the bash (interactive user) and dash (installation) shells and the core file and text manipulation utilities. Additional packages are available as optional installs from within the Cygwin "setup" program and package manager ("setup-x86_64.exe" – 64-bit). The Cygwin Ports project provided additional packages that were not available in the Cygwin distribution itself. Examples included GNOME, K Desktop Environment 3, the MySQL database, and the PHP scripting language. Most ports have been adopted by volunteer maintainers as Cygwin packages, and Cygwin Ports are no longer maintained. Cygwin ships with GTK+ and Qt. The Cygwin/X project allows graphical Unix programs to display their user interfaces on the Windows desktop, for both local and remote programs.
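As a concrete illustration of the POSIX layer described above, the short C program below is a generic POSIX sketch, not code from the Cygwin distribution: it relies only on POSIX interfaces (fork, uname, getenv) and Unix-style paths, so under Cygwin it builds unchanged with the bundled GCC (for example with a command like "gcc posix_demo.c -o posix_demo", where the file name is hypothetical), because the Cygwin DLL translates these calls into Windows system calls.

```c
/* Minimal sketch: a plain POSIX program that compiles unchanged under
 * Cygwin's GCC because the Cygwin DLL maps POSIX calls (fork, uname,
 * environment lookup) onto the Windows API.  The file and program names
 * are illustrative, not part of the Cygwin distribution. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/utsname.h>
#include <sys/wait.h>

int main(void) {
    struct utsname info;
    if (uname(&info) == 0)                      /* reports e.g. "CYGWIN_NT-..." */
        printf("Running on: %s %s\n", info.sysname, info.release);

    pid_t pid = fork();                         /* fully implemented by the Cygwin DLL */
    if (pid == 0) {
        /* Child process: HOME points into the Unix-style directory layout
           described above (e.g. /home/<user>). */
        const char *home = getenv("HOME");
        printf("child %d sees HOME=%s\n", (int)getpid(), home ? home : "(unset)");
        _exit(0);
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);               /* parent waits for the child */
        printf("parent %d: child finished\n", (int)getpid());
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```

Building the same file with one of the MinGW-w64 cross-compilers mentioned above would fail on the fork() and uname() calls, which is precisely the difference between targeting the Cygwin POSIX layer and the plain Win32 runtime.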
6845
1294673636
https://en.wikipedia.org/wiki?curid=6845
Corinth
Corinth is a city in Corinthia, Greece, and the successor to the ancient city of Corinth. A former municipality in Corinthia, Peloponnese, in south-central Greece, it has, since the 2011 local government reform, been part of the municipality of Corinth, of which it is the seat and a municipal unit. It is the capital of Corinthia. It was founded as Nea Korinthos, or New Corinth, in 1858 after an earthquake destroyed the existing settlement of Corinth, which had developed in and around the site of the ancient city. History. Corinth derives its name from Ancient Corinth, a city-state of antiquity. The site was occupied from before 3000 BC. Ancient Greece. Historical references begin with the early 8th century BC, when ancient Corinth began to develop as a commercial center. Between the 8th and 7th centuries, the Bacchiad family ruled Corinth. Cypselus overthrew the Bacchiad family, and between 657 and 585 BC, he and his son Periander ruled Corinth as the Tyrants. In about 585 BC, an oligarchical government seized power. This government later allied with Sparta within the Peloponnesian League, and Corinth participated in the Persian Wars and Peloponnesian War as an ally of Sparta. After Sparta's victory in the Peloponnesian war, the two allies fell out with one another, and Corinth pursued an independent policy in the various wars of the early 4th century BC. After the Macedonian conquest of Greece, the Acrocorinth was the seat of a Macedonian garrison until 243 BC, when the city joined the Achaean League. Ancient Rome. Nearly a century later, in 146 BC, Corinth was captured and completely destroyed by the Roman army. As a newly rebuilt Roman colony in 44 BC, Corinth flourished and became the administrative capital of the Roman province of Achaea. Medieval times. A major earthquake struck Corinth and its region in 856, causing around 45,000 deaths. Modern era. In 1858, the old city, now known as Ancient Corinth (Αρχαία Κόρινθος, "Archaia Korinthos"), located southwest of the modern city, was totally destroyed by a magnitude 6.5 earthquake. New Corinth ("Nea Korinthos") was then built to the north-east of it, on the coast of the Gulf of Corinth. In 1928, a magnitude 6.3 earthquake devastated the new city, which was then rebuilt on the same site. In 1933, there was a great fire, and the new city was rebuilt again. During the German occupation in World War II, the Germans operated a Dulag transit camp for British, Australian, New Zealander and Serbian prisoners of war and a forced labour camp in the town. Geography. Located about west of Athens, Corinth is surrounded by the coastal townlets of (clockwise) Lechaio, Isthmia, Kechries, and the inland townlets of Examilia and the archaeological site and village of ancient Corinth. Natural features around the city include the narrow coastal plain of Vocha, the Corinthian Gulf, the Isthmus of Corinth cut by its canal, the Saronic Gulf, the Oneia Mountains, and the monolithic rock of Acrocorinth, where the medieval acropolis was built. Climate. According to the nearby weather station of Velo, operated by the Hellenic National Meteorological Service, Corinth has a hot-summer Mediterranean climate (Köppen climate classification: "Csa"), with hot, dry summers and cool, rainy winters. The hottest month is July with an average temperature of while the coldest month is January with an average temperature of . Corinth receives about 463 mm of rainfall per year and has an average annual temperature of . Demographics. 
The Municipality of Corinth (Δήμος Κορινθίων) had a population of 55,941 according to the 2021 census, the second most populous municipality in the Peloponnese Region after Kalamata. The municipal unit of Corinth had 38,485 inhabitants, of which Corinth itself had 30,816 inhabitants, placing it in second place behind Kalamata among the cities of the Peloponnese Region. The municipal unit of Corinth (Δημοτική ενότητα Κορινθίων) includes, apart from Corinth proper, the town of Archaia Korinthos, the town of Examilia, and the smaller settlements of Xylokeriza and Solomos. The municipal unit has an area of 102.187 km2. Economy. Industry. Corinth is a major industrial hub at a national level. The Corinth Refinery is one of the largest oil refining industrial complexes in Europe. Ceramic tiles, copper cables, gums, gypsum, leather, marble, meat products, medical equipment, mineral water and beverages, petroleum products, and salt are produced nearby. A period of economic change commenced as a large pipework complex, a textile factory and a meat-packing facility diminished their operations. Transport. Roads. Corinth is a major road hub. The A7 toll motorway for Tripoli and Kalamata (and Sparta via the A71 toll) branches off the A8/E94 toll motorway from Athens at Corinth. Corinth is the main entry point to the Peloponnesian peninsula, the southernmost area of continental Greece. Bus. KTEL Korinthias provides intercity bus service in the peninsula and to Athens via the Isthmos station southeast of the city center. Local bus service is also available. Railways. The metre-gauge railway from Athens and Piraeus reached Corinth in 1884; its station closed to regular public transport in 2007. In 2005, two years earlier, the city had been connected to the Athens Suburban Railway, following the completion of the new Corinth railway station. The journey time from Athens to Corinth is about 55 minutes. The train station is 5 minutes by car from the city centre and parking is available for free. Port. The port of Corinth, located north of the city centre and close to the northwest entrance of the Corinth Canal, at 37°56.0′N / 22°56.0′E, serves the local needs of industry and agriculture. It is mainly a cargo exporting facility. It is an artificial harbour (depth approximately ), protected by a concrete mole (length approximately 930 metres, width 100 metres, mole surface 93,000 m2). A new pier finished in the late 1980s doubled the capacity of the port. The reinforced mole protects anchored vessels from strong northern winds. A customs office facility and a Hellenic Coast Guard post operate within the port. Sea traffic is limited to the export of local produce, mainly citrus fruits, grapes, marble and aggregates, and to some domestic imports. The port operates as a contingency facility for general cargo ships, bulk carriers and ROROs, in case of strikes at Piraeus port. Ferries. There was formerly a ferry link to Catania in Sicily and Genoa in Italy. Canal. The Corinth Canal, carrying ship traffic between the western Mediterranean Sea and the Aegean Sea, is about east of the city, cutting through the Isthmus of Corinth that connects the Peloponnesian peninsula to the Greek mainland, thus effectively making the former an island. The builders dug the canal through the Isthmus at sea level; no locks are employed. It is in length and only wide at its base, making it impassable for most modern ships. It now has little economic importance. 
The canal was mooted in ancient times and an abortive effort was made to dig it in around 600 BC by Periander which led him to pave the Diolkos highway instead. Julius Caesar and Caligula both considered digging the canal but died before starting the construction. The emperor Nero then directed the project, which consisted initially of a workforce of 6,000 Jewish prisoners of war, but it was interrupted because of his death. The project resumed only in 1882, after Greece gained independence from the Ottoman Empire, but was hampered by geological and financial problems that bankrupted the original builders. It was finally completed in 1893, but due to the canal's narrowness, navigational problems and periodic closures to repair landslips from its steep walls, it failed to attract the level of traffic anticipated by its operators. It is now used mainly for tourist traffic. Sport. The city's association football team is Korinthos F.C. ("Π.Α.E. Κόρινθος"), established in 1999 after the merger of Pankorinthian Football Club ("Παγκορινθιακός") and Corinth Football Club ("Κόρινθος"). During the 2006–2007 season, the team played in the Greek Fourth Division's Regional Group 7. The team went undefeated that season and it earned the top spot. This granted the team a promotion to the Gamma Ethnikí (Third Division) for the 2007–2008 season. For the 2008–2009 season, Korinthos F.C. competed in the Gamma Ethniki (Third Division) southern grouping. Twin towns/sister cities. Corinth is twinned with: Other locations named after Corinth. Due to its ancient history and the presence of St. Paul the Apostle in Corinth some locations all over the world have been named Corinth.
6846
1021097
https://en.wikipedia.org/wiki?curid=6846
Colossae
Colossae, sometimes called Colosse, was an ancient city of Phrygia in southern Asia Minor (Anatolia), Turkey. The Epistle to the Colossians, an early Christian text which identifies its author as Paul the Apostle, is addressed to the church in Colossae. A significant city from the 5th century BC onwards, it had dwindled in importance by the time of Paul, and was notable for the existence of its local angel cult. It was part of the Roman and Byzantine province of Phrygia Pacatiana, before being destroyed in 1192/3 and its population relocating to nearby "Chonae" (Chonai, modern-day Honaz). Location and geography. Colossae was in Phrygia, in Asia Minor. It was located southeast of Laodicea on the road through the Lycus Valley near the Lycus River at the foot of Mt. Cadmus, the highest mountain in Turkey's western Aegean Region, between the cities of Sardeis and Celaenae, and southeast of the ancient city of Hierapolis. Herodotus said that at Colossae "the river Lycos falls into an opening of the earth and disappears from view, and then after an interval of about five furlongs it comes up to view again, and this river also flows into the Meander River". Colossae has been distinguished in modern research from nearby "Chonai", called Honaz in modern times, with what remains of the buried ruins of Colossae ("the mound") lying to the north of Honaz. Origin and etymology of place name. The medieval poet Manuel Philes incorrectly said that the name "Colossae" was connected to the Colossus of Rhodes. More recently, in an interpretation that ties Colossae to an Indo-European root that happens to be shared with the word "kolossos", Jean-Pierre Vernant has connected the name to the idea of setting up a sacred space or shrine. Another proposal relates the name to the Greek "kolazo" 'to punish'. Others have said the name derives from its manufacture of dyed wool, or "colossinus". History. Before the Pauline period. The first mention of the city may be in a 17th-century BC Hittite inscription, which speaks of a city called Huwalušija, which some archeologists believe is a reference to early Colossae. The 5th-century BC historian Herodotus first mentions Colossae by name, describing it as a "great city in Phrygia" that accommodated the Persian king Xerxes I while he was en route to wage war against the Greeks in the Greco-Persian Wars, showing that the city had already reached a certain level of wealth and size by this time. Writing in the 5th century BC, Xenophon said Colossae was "a populous city, wealthy and of considerable magnitude". Strabo said the city drew great revenue from its sheep, and that the wool of Colossae gave its name to a colour, "colossinus". In 396 BC Colossae was the site of the execution of the rebellious Persian satrap Tissaphernes, who was lured there and slain by an agent of the party of Cyrus the Younger. Pauline period. During the Hellenistic period, the town was of some mercantile importance. By the 1st century it had dwindled greatly in size and significance. Paul's letter to the Colossians points to the existence of an early Christian community. Colossae was home to the miracle at the Archangel church, where a sacristan named Archipos witnessed how the Archangel Michael thwarted a plan by the heathens to destroy the church by flooding it with the waters of nearby mountain rivers. The Eastern Orthodox Church commemorates this feast on 6 (19) September. The canonical biblical text Epistle to the Colossians is addressed to the Christian community in Colossae. 
The epistle has traditionally been attributed to Paul the Apostle due to its autobiographical salutation and style, but some modern critical scholars now believe it was written by another author some time after Paul's death. It is believed that one aim of the letter was to address the challenges that the Colossian community faced in its context of the syncretistic Gnostic religions that were developing in Asia Minor. According to the Epistle to the Colossians, Epaphras seems to have been a person of some importance in the Christian community in Colossae, and tradition presents him as its first bishop. The epistle also seems to imply that Paul had never visited the city, because it only speaks of him having "heard" of the Colossians' faith, and in the Epistle to Philemon Paul tells Philemon of his hope to visit Colossae upon being freed from prison. Tradition also gives Philemon as the second bishop of the see. The city was decimated by an earthquake in the 60s AD, and was rebuilt independently of support from Rome. The Apostolic Constitutions list Philemon as a bishop of Colossae. On the other hand, the "Catholic Encyclopedia" considers Philemon doubtful. The first historically documented bishop is Epiphanius, who was not personally at the Council of Chalcedon, but whose metropolitan bishop Nunechius of Laodicea, the capital of the Roman province of Phrygia Pacatiana, signed the acts on his behalf. Byzantine period and decline. The city's fame and renowned status continued into the Byzantine period, and in 858, it was distinguished as a Metropolitan See. The Byzantines also built the church of St. Michael in the vicinity of Colossae, one of the largest church buildings in the Middle East. Nevertheless, sources suggest that the town may have decreased in size or may even have been completely abandoned due to Arab invasions in the seventh and eighth centuries, forcing the population to flee and resettle in the nearby city of Chonai (modern-day Honaz). Colossae's church was destroyed in 1192/3 during the Byzantine civil wars. It was a suffragan diocese of Laodicea in Phrygia Pacatiana but was replaced in the Byzantine period by the Chonae settlement on higher ground. Modern study and archeology. Most archeological attention has been focused on nearby Laodicea and Hierapolis. Excavations of Colossae began in 2021, led by Bariş Yener of Pamukkale University in Denizli. The first several years involve surface surveys to analyze pottery and survey the landscape. They hope to start digging in 2023–24. The site exhibits a biconical acropolis almost high, and encompasses an area of almost . On the eastern slope there sits a theater which probably seated around 5,000 people, suggesting a total population of 25,000–30,000 people. The theater was probably built during the Roman period, and may be near an agora that abuts the "cardo maximus", or the city's main north–south road. Ceramic finds around the theater confirm the city's early occupation in the third and second millennia BC. Northeast of the tell, and most likely outside the city walls, a necropolis displays Hellenistic tombs with two main styles of burial: one with an antecedent room connected to an inner chamber, and tumuli, or underground chambers accessed by stairs leading to the entrance. Outside the tell, there are also remains of sections of columns that may have marked a processional way, or the "cardo". Today, the remains of one column mark the location where locals believe a church once stood, possibly that of St. Michael. 
Near the Lycus River, there is evidence that water channels had been cut out of the rock with a complex of pipes and sluice gates to divert water for bathing and for agricultural and industrial purposes. Modern legacy. The holiness and healing properties associated with the waters of Colossae during the Byzantine era continue to this day, particularly at a pool fed by the Lycus River at the Göz picnic grounds west of Colossae at the foot of Mt. Cadmus. Locals consider the water to be therapeutic.
6848
34162051
https://en.wikipedia.org/wiki?curid=6848
Charge of the Goddess
The Charge of the Goddess (or Charge of the Star Goddess) is an inspirational text often used in the neopagan religion of Wicca. The Charge of the Goddess is recited during most rituals in which the Wiccan priest/priestess is expected to represent, and/or embody, the Goddess within the sacred circle, and is often spoken by the High Priest/Priestess after the ritual of Drawing Down the Moon. The Charge is the promise of the Goddess (who is embodied by the high priestess) to all witches that she will teach and guide them. It has been called "perhaps the most important single theological document in the neo-Pagan movement". It is used not only in Wicca, but as part of the foundational documents of the Reclaiming tradition of witchcraft co-founded by Starhawk. Several versions of the Charge exist, though they all have the same basic premise, that of a set of instructions given by the Great Goddess to her worshippers. The earliest version is that compiled by Gerald Gardner. This version, titled "Leviter Veslis" or "Lift Up the Veil", includes material paraphrased from works by Aleister Crowley, primarily from Liber AL (The Book of the Law, particularly from Ch 1, spoken by Nuit, the Star Goddess), and from Liber LXV (The Book of the Heart Girt with a Serpent) and from Crowley's essay "The Law of Liberty", thus linking modern Wicca to the cosmology and revelations of Thelema. It has been shown that Gerald Gardner's book collection included a copy of Crowley's "The Blue Equinox" (1919) which includes all of the Crowley quotations transferred by Gardner to the Charge of the Goddess. There are also two versions written by Doreen Valiente in the mid-1950s, after her 1953 Wiccan initiation. The first was a poetic paraphrase which eliminated almost all the material derived from Leland and Crowley. The second was a prose version which is contained within the traditional Gardnerian Book of Shadows and more closely resembles Gardner's "Leviter Veslis" version of 1949. Several different versions of a Wiccan Charge of the God have since been created to mirror and accompany the Charge of the Goddess. Themes. The opening paragraph names a collection of goddesses, some derived from Greek or Roman mythology, others from Celtic or Arthurian legends, affirming a belief that these various figures represent a single Great Mother: This theme echoes the ancient Roman belief that the Goddess Isis was known by ten thousand names and also that the Goddess still worshipped today by Wiccans and other neopagans is known under many guises but is in fact one universal divinity. The second paragraph is largely derived and paraphrased from the words that Aradia, the messianic daughter of Diana, speaks to her followers in Charles Godfrey Leland's 1899 book "Aradia, or the Gospel of the Witches" (London: David Nutt; various reprints). The third paragraph is largely written by Doreen Valiente, with a significant content of phrases loosely from "The Book of the Law" and "The Book of the Heart Girt with the Serpent" by Aleister Crowley. The charge affirms that "all" acts of love and pleasure are sacred to the Goddess, e.g.: History. Ancient precedents. In book eleven, chapter 47 of Apuleius's "The Golden Ass", Isis delivers what Ceisiwr Serith calls "essentially a charge of a goddess". This is rather different from the modern version known in Wicca, though they have the same premise, that of the rules given by a great Mother Goddess to her faithful. The Charge of the Goddess is also known under the title "Leviter Veslis". 
This has been identified by the historian Ronald Hutton, cited in an article by Roger Dearnsley, "The Influence of Aleister Crowley on 'Ye Bok of Ye Art Magical'", as a piece of medieval ecclesiastical Latin used to mean "lifting the veil." However, Hutton's interpretation does not reflect the Latin grammar as it currently stands. It may represent Gardner's attempt to write "Levetur Velis", which has the literal meaning of "Let the veil be lifted." This expression would, by coincidence or design, grammatically echo the famous "fiat lux" ("Gen. 1:3") of the Latin Vulgate. Origins. The earliest known Wiccan version is found in a document dating from the late 1940s, Gerald Gardner's ritual notebook titled "Ye Bok of Ye Art Magical". The oldest identifiable source contained in this version is the final line, which is traceable to the 17th-century "Centrum Naturae Concentratum" of Alipili (or Ali Puli). This version also draws extensively from Charles Godfrey Leland's "Aradia, or the Gospel of the Witches" (1899) and other modern sources, particularly from the works of Aleister Crowley. It is believed to have been compiled by Gerald Gardner or possibly another member of the New Forest coven. Gardner intended his version to be a theological statement justifying the Gardnerian sequence of initiations. Like the Charge found in Freemasonry, where the charge is a set of instructions read to a candidate standing in a temple, the Charge of the Goddess was intended to be read immediately before an initiation. Valiente felt that the influence of Crowley on the Charge was too obvious, and she did not want "the Craft" (a common term for Wicca) associated with Crowley. Gardner invited her to rewrite the Charge. She proceeded to do so, her first version being in verse. The initial verse version by Doreen Valiente consisted of eight verses. Valiente was unhappy with this version, saying that "people seemed to have some difficulty with this, because of the various goddess-names which they found hard to pronounce", and so she rewrote it as a prose version, much of which differs from her initial version, and is more akin to Gardner's version. This prose version has since been modified and reproduced widely by other authors.
6849
4268551
https://en.wikipedia.org/wiki?curid=6849
Cy Young
Denton True "Cy" Young (March 29, 1867 – November 4, 1955) was an American Major League Baseball (MLB) pitcher. Born in Gilmore, Ohio, he worked on his family's farm as a youth before starting his professional baseball career. Young entered the major leagues in 1890 with the National League's Cleveland Spiders and pitched for them until 1898. He was then transferred to the St. Louis Cardinals franchise. In 1901, Young jumped to the American League and played for the Boston Red Sox franchise until 1908, helping them win the 1903 World Series. He finished his career with the Cleveland Naps and Boston Rustlers, retiring in 1911. Young was one of the hardest-throwing pitchers in the game early in his career. After his speed diminished, he relied more on his control and remained effective into his forties. By the time Young retired, he had established numerous pitching records, some of which have stood for over a century. He holds MLB records for the most career wins, with 511, along with most career losses, earned runs, hits allowed, innings pitched, games started, batters faced, and complete games. He led his league in wins during five seasons and pitched three no-hitters, including a perfect game in 1904. In 1937, Young was elected to the National Baseball Hall of Fame. He is often regarded as one of the greatest pitchers of all time, as well as a pioneer of modern pitching. In 1956, one year after his death, the Cy Young Award was created to annually honor the best pitcher in the Major Leagues (later each League) of the previous season, cementing his name as synonymous with excellence in pitching. Early life. Cy Young was the oldest child born to Nancy (Mottmiller) and McKinzie Young Jr., and was christened Denton True Young. He was of part German descent. The couple had four more children: Jesse Carlton, Alonzo, Ella, and Anthony. When the couple married, McKinzie's father gave him the of farm land he owned. Young was born in Gilmore, a tiny farming community located in Washington Township, Tuscarawas County, Ohio. He was raised on one of the local farms and went by the name Dent Young in his early years. Young was also known as "Farmer Young" and "Farmboy Young". Young stopped his formal education after he completed the sixth grade so he could help out on the family's farm. In 1885, Young moved with his father to Nebraska, and in the summer of 1887, they returned to Gilmore. Young played for many amateur baseball leagues during his youth, including a semi-professional Carrollton team in 1888. Young pitched and played second base. The first box score known containing the name Young came from that season. In that game, Young played first base and had three hits in three at-bats. After the season, Young received an offer to play for the minor league Canton team, which started Young's professional career. Professional career. Minor leagues. Young began his professional career in 1890 with the Canton, Ohio based Canton Nadjys, team of the Tri-State League, a professional minor league. During his tryout, Young impressed the scouts, recalling years later, "I almost tore the boards off the grandstand with my fast ball." Cy Young's nickname came from the fences that he had destroyed using his fastball. The fences looked like a cyclone had hit them. Reporters later shortened the name to "Cy", which became the nickname Young used for the rest of his life. During his one year with Canton, he was 15-15. 
Franchises in the National League, the major professional baseball league at the time, wanted the best players available to them. Therefore, in 1890, Young signed with the Cleveland Spiders, a team that had moved from the American Association to the National League the previous year. Cleveland Spiders (1890–1898). On August 6, 1890, Young's major league debut, he pitched a three-hit 8–1 victory over the Chicago Colts. While Young was with the Spiders, Chief Zimmer was his catcher more often than any other player. Bill James, a baseball statistician, estimated that Zimmer caught Young in more games than any other battery in baseball history. Early on, Young established himself as one of the harder-throwing pitchers in the game. Bill James wrote that Zimmer often put a piece of beefsteak inside his baseball glove to protect his catching hand from Young's fastball. In the absence of radar guns, however, it is impossible to say just how hard Young actually threw. Young continued to perform at a high level during the 1890 season. On the last day of the season, Young won both games of a doubleheader. In the first weeks of Young's career, Cap Anson, the player-manager of the Chicago Colts spotted Young's ability. Anson told Spiders manager Gus Schmelz, "He's too green to do your club much good, but I believe if I taught him what I know, I might make a pitcher out of him in a couple of years. He's not worth it now, but I'm willing to give you $1,000 ($ today) for him." Schmelz replied, "Cap, you can keep your thousand and we'll keep the rube." Two years after Young's debut, the National League moved the pitcher's position back by . Since 1881, pitchers had pitched within a "box" whose front line was from home base, and since 1887 they had been compelled to toe the back line of the box when delivering the ball. The back line was away from home. In 1893, was added to the back line, yielding the modern pitching distance of . In the book "The Neyer/James Guide to Pitchers", sports journalist Rob Neyer wrote that the speed with which pitchers like Cy Young, Amos Rusie, and Jouett Meekin threw was the impetus that caused the move. The 1892 regular season was a success for Young, who led the National League in wins (36), ERA (1.93), and shutouts (9). Just as many contemporary Minor League Baseball leagues operate today, the National League was using a split season format during the 1892 season. The Boston Beaneaters won the first half and the Spiders won the second-half, with a best-of-nine series determining the league champion. Despite the Spiders' second-half run, the Beaneaters swept the series, five games to none. Young pitched three complete games: he lost two and one ended in a scoreless tie. The Spiders faced the Baltimore Orioles in the Temple Cup, a precursor to the World Series, in 1895. Young won three games in the series and Cleveland won the Cup, four games to one. It was around this time that Young added what he called a "slow ball" to his pitching repertoire to reduce stress on his arm. The pitch today is called a changeup. In 1896, Young lost a no-hitter with two outs in the ninth inning when Ed Delahanty of the Philadelphia Phillies hit a single. On September 18, 1897, Young pitched the first no-hitter of his career in a game against the Cincinnati Reds. Although Young did not walk a batter, the Spiders committed four errors while on defense. 
One of the errors had originally been ruled a hit, but the Cleveland third baseman sent a note to the press box after the eighth inning, saying he had made an error, and the ruling was changed. Young later said that, despite his teammate's gesture, he considered the game to be a one-hitter. St. Louis Perfectos / Cardinals (1899–1900). Prior to the 1899 season, Frank Robison, the Spiders owner, bought the St. Louis Browns, thus owning two clubs simultaneously. The Browns were renamed the "Perfectos", and restocked with Cleveland talent. Just weeks before the season opener, most of the better Spiders players were transferred to St. Louis, including three future Hall of Famers: Young, Jesse Burkett, and Bobby Wallace. The roster maneuvers failed to create a powerhouse Perfectos team, as St. Louis finished fifth in both 1899 and 1900. Meanwhile, the depleted Spiders lost 134 games, the most in MLB history, before folding. Young spent two years with St. Louis, which is where he found his favorite catcher, Lou Criger. The two men were teammates for a decade. Boston Americans / Red Sox (1901–1908). In 1901, the rival American League declared major league status and set about raiding National League rosters. Young left St. Louis and joined the American League's Boston Americans for a $3,500 contract ($ today). Young would remain with the Boston team until 1909. In his first year in the American League, Young was dominant. Pitching to Criger, who had also jumped to Boston, Young led the league in wins, strikeouts, and ERA, thus earning the colloquial AL Triple Crown for pitchers. Young won almost 42% of his team's games in 1901, accounting for 33 of his team's 79 wins. In February 1902, before the start of the baseball season, Young served as a pitching coach at Harvard University. The sixth-grade graduate instructing Harvard students delighted Boston newspapers. The following year, Young coached at Mercer University during the spring. The team went on to win the Georgia state championship in 1903, 1904, and 1905. The Boston Americans played the Pittsburgh Pirates in the first modern World Series in 1903. Young, who started Game One against the visiting Pirates, thus threw the first pitch in modern World Series history. The Pirates scored four runs in that first inning, and Young lost the game. Young performed better in subsequent games, winning his next two starts. He also drove in three runs in Game Five. Young finished the series with a 2–1 record and a 1.85 ERA in four appearances, and Boston defeated Pittsburgh, five games to three. After one-hitting Boston on May 2, 1904, Philadelphia Athletics pitcher Rube Waddell taunted Young to face him so that he could repeat his performance against Boston's ace. Three days later, Young pitched a perfect game against Waddell and the Athletics. It was the first perfect game in American League history. Waddell was the 27th and last batter, and when he flied out, Young shouted, "How do you like that, you hayseed?" Waddell had picked an inauspicious time to issue his challenge. Young's perfect game was the centerpiece of a pitching streak. Young set major league records for the most consecutive scoreless innings pitched and the most consecutive innings without allowing a hit; the latter record still stands at innings, or 76 hitless batters. Even after he allowed a hit, Young's scoreless streak reached a then-record 45 shutout innings. Before Young, only two pitchers had thrown perfect games. 
This occurred in 1880, when Lee Richmond and John Montgomery Ward pitched perfect games within five days of each other, although under somewhat different rules: the front edge of the pitcher's box was only from home base (the modern release point is about farther away); walks required eight balls; and pitchers were obliged to throw side-armed. Young's perfect game was the first under the modern rules established in 1893. One year later, on July 4, 1905, Rube Waddell beat Young and the Americans, 4–2, in a 20-inning matchup. Young pitched 13 consecutive scoreless innings before he gave up a pair of unearned runs in the final inning. Young did not walk a batter and was later quoted: "For my part, I think it was the greatest game of ball I ever took part in." In 1907, Young and Waddell faced off in a scoreless 13-inning tie. In 1908, Young pitched the third no-hitter of his career. Three months past his 41st birthday, he was the oldest pitcher to record a no-hitter, a record which would stand 82 years until 43-year-old Nolan Ryan broke it. Only a walk kept Young from his second perfect game. After that runner was caught stealing, no other batter reached base. At the time, Young was the second-oldest player in either league. In another game one month before his no-hitter, he allowed just one single while facing 28 batters. On August 13, 1908, the league celebrated "Cy Young Day". No American League games were played on that day, and a group of All-Stars from the league's other teams gathered in Boston to play against Young and the Red Sox. When the season ended, he posted a 1.26 ERA, which gave him not only the lowest in his career, but also a major league record of being the oldest pitcher with 150+ innings and an ERA under 1.50. Cleveland Naps (1909–1911). Young was traded back to Cleveland, the place where he played over half his career, before the 1909 season, to the Cleveland Naps of the American League. The following season, 1910, he won his 500th career game on July 19 against Washington. Boston Rustlers (1911) and retirement. He split 1911, his final year, between the Naps and the Boston Rustlers. On September 22, 1911, Young shut out the Pittsburgh Pirates, 1–0, for his last career victory. In his final start two weeks later, the last eight batters of Young's career combined to hit a triple, four singles, and three doubles. By the time of his retirement, Young's control had faltered. He had also gained weight. In two of his last three years, he was the oldest player in the league. Career accomplishments. Young established numerous pitching records, some of which have stood for over a century. Young compiled 511 wins, which is the most in major league history and 94 ahead of Walter Johnson, second on the list. At the time of Young's retirement, Pud Galvin had the second most career wins with 364. In addition to wins, Young still holds the major league records for most career innings pitched (7,356), most career games started (815), and most complete games (749). He also retired with 316 losses, the most in MLB history. Young's career record for strikeouts was broken by Johnson in 1921. Young's 76 career shutouts are fourth all-time. Young led his league in wins five times (1892, 1895, and 1901–1903), finishing second twice. His career high was 36 in 1892. He won at least 30 games in a season five times. He had 15 seasons with 20 or more wins, two more than Christy Mathewson and Warren Spahn. 
Young won two ERA titles during his career, in 1892 (1.93) and in 1901 (1.62), and was three times the runner-up. Young's earned run average was below 2.00 six times, but it was not uncommon during the dead-ball era. Although Young threw over 400 innings in each of his first four full seasons, he did not lead his league until 1902. He had 40 or more complete games nine times. Young also led his league in strikeouts twice (140 in 1896 and 158 in 1901), and in shutouts seven times. Young led his league in fewest walks per nine innings fourteen times and finished second once. Only twice in his 22-year career did he finish lower than 5th in the category. Although the WHIP ratio was not calculated until well after Young's death, he was retroactively league leader seven times and was second or third another seven times. Young is tied with Roger Clemens for the most career wins by a Boston Red Sox pitcher: they each won 192 games while with the franchise. In addition, Young pitched three no-hitters, including the third perfect game in baseball history, first in baseball's "modern era". Young also was an above average hitting pitcher. He posted a .210 batting average (623-for-2960) with 325 runs, 290 RBIs, 18 home runs, and 81 walks. From 1891 through 1905, he drove in 10 or more runs for 15 straight seasons, with a high of 28 in 1896. Pitching style. Particularly after his fastball slowed, Young relied upon his control. He was once quoted as saying, "Some may have thought it was essential to know how to curve a ball before anything else. Experience, to my mind, teaches to the contrary. Any young player who has good control will become a successful curve pitcher long before the pitcher who is endeavoring to master both curves and control at the same time. The curve is merely an accessory to control." In addition to his exceptional control, Young was also a workhorse who avoided injury, owing partly to his ability to pitch in different arm positions (overhand, three-quarters, sidearm and even submarine). For 19 consecutive years, from 1891 through 1909, Young was in his league's top 10 for innings pitched; in 14 of the seasons, he was in the top five. Not until 1900, a decade into his career, did Young pitch two consecutive incomplete games. By habit, Young restricted his practice throws in spring training. "I figured the old arm had just so many throws in it," said Young, "and there wasn't any use wasting them." He once described his approach before a game: I never warmed up ten, fifteen minutes before a game like most pitchers do. I'd loosen up, three, four minutes. Five at the outside. And I never went to the bullpen. Oh, I'd relieve all right, plenty of times, but I went right from the bench to the box, and I'd take a few warm-up pitches and be ready. Then I had good control. I aimed to make the batter hit the ball, and I threw as few pitches as possible. That's why I was able to work every other day. Later life. In 1910, it was reported that Young became a vegetarian, after baseball and working on his farm. In 1913, he served as manager of the Cleveland Green Sox of the Federal League, which was at the time an outlaw league. However, he never worked in baseball after that. Young was a Freemason. In 1916, he ran for county treasurer in Tuscarawas County, Ohio. Young's wife, Roba, whom he had known since childhood, died in 1933. After she died, Young tried several jobs, and eventually moved in with friends John and Ruth Benedum and did odd jobs for them. 
Young took part in many baseball events after his retirement. In 1937, 26 years after he retired from baseball, Young was inducted into the newly created Baseball Hall of Fame's freshman class. He was among the first to donate mementos to the Hall. By 1940, Young's only source of income was stock dividends of $300 per year ($ today). He appeared on the television show "I've Got a Secret" on April 13, 1955. On November 4, 1955, Young died on the Benedums' farm at the age of 88. He was buried in Peoli, Ohio. Legacy. Young's career is seen as a bridge from baseball's earliest days to its modern era; he pitched against stars such as Cap Anson, already an established player when the National League was first formed in 1876, as well as against Eddie Collins, who played until 1930. When Young's career began, pitchers delivered the baseball underhand and fouls were not counted as strikes. The pitcher's mound was not moved back to its present position of until Young's fourth season; he did not wear a glove until his sixth season. Young was elected to the National Baseball Hall of Fame in 1937. In 1956, about one year after Young's death, the Cy Young Award was created to honor the best pitcher in Major League Baseball for each season. The first award was given to Brooklyn's Don Newcombe. Originally, it was a single award covering all of baseball. The honor was divided into two Cy Young Awards in 1967, one for each league. On September 23, 1993, a statue dedicated to him was unveiled by Northeastern University on the site of the Red Sox's original stadium, the Huntington Avenue Grounds. It was there that Young had pitched the first game of the 1903 World Series, as well as the first perfect game in the modern era of baseball. A home plate-shaped plaque next to the statue reads: On October 1, 1903 the first modern World Series between the American League champion Boston Pilgrims (later known as the Red Sox) and the National League champion Pittsburgh Pirates was played on this site. General admission tickets were fifty cents. The Pilgrims, led by twenty-eight game winner Cy Young, trailed the series three games to one but then swept four consecutive victories to win the championship five games to three. In 1999, 88 years after his final major league appearance and 44 years after his death, editors at "The Sporting News" ranked Young 14th on their list of "Baseball's 100 Greatest Players". That same year, baseball fans named him to the Major League Baseball All-Century Team.
6851
91088
https://en.wikipedia.org/wiki?curid=6851
Coronation Street
Coronation Street (colloquially referred to as Corrie) is a British television soap opera created by Granada Television and shown on ITV since 9 December 1960. The programme centres on a cobbled, terraced street in the fictional town of Weatherfield in Greater Manchester. The location was itself based on Salford, the hometown of the show's first screenwriter and creator, Tony Warren. Originally broadcast twice weekly, "Coronation Street" increased its runtime in later years, currently airing three 22-minute episodes per week. Warren developed the concept for the series, which was initially rejected by Granada's founder Sidney Bernstein. Producer Harry Elton convinced Bernstein to commission 13 pilot episodes. The show has since become a significant part of British culture and underpinned the success of its producing Granada franchise. Currently produced by ITV Studios, the successor to Granada, the series is filmed at MediaCityUK and broadcast across all ITV regions, as well as internationally. In 2010, "Coronation Street" was recognised by "Guinness World Records" as the world's longest-running television soap opera upon its 50th anniversary. "Coronation Street" was originally influenced by kitchen-sink realism and is known for portraying a working-class community with a blend of humour and strong, relatable characters. As of 2025, it averages approximately four million viewers per episode. The series aired its 10,000th episode on 7 February 2020 and marked its 60th anniversary later that year. History. 1960s. The first episode of "Coronation Street" aired on 9 December 1960 at 7 pm. It initially received mixed reviews; "Daily Mirror" columnist Ken Irwin predicted the series would last only three weeks. The "Daily Mirror" also printed: "The programme is doomed from the outset ... For there is little reality in this new serial, which apparently, we have to suffer twice a week." Granada Television had commissioned 13 episodes, with some inside the company doubting the show would last beyond its planned production run. However, viewers quickly connected with the programme's portrayal of relatable, everyday characters. The programme also made use of Northern English language and dialect; affectionate local terms like "eh, chuck?", "nowt" (, from "nought", meaning "nothing"), and "by 'eck!" became widely heard on British television for the first time. Early storylines included student Ken Barlow (William Roache), whose university education set him apart from his working-class family, including his brother David (Alan Rothwell) and parents Frank (Frank Pemberton) and Ida (Noel Dyson). Barlow's character offered commentary on broader social changes, including globalisation, as exemplified by his 1961 line: "You can't go on just thinking about your own street these days. We're living with people on the other side of the world." Roache remains the only original cast member and holds the record as the longest-serving actor in "Coronation Street" and global soap opera history. In March 1961, the show reached number one in the television ratings and remained there for the rest of the year. Earlier that year, a television audience measurement (TAM) showed that 75% of available viewers (approximately 15 million people) watched the programme. By 1964, "Coronation Street" attracted over 20 million regular viewers, with ratings peaking on 2 December 1964, at 21.36 million viewers. In 1964, Tim Aspinall became the series producer and implemented significant changes to the programme. 
Nine cast members were fired, the first being Lynne Carol, who had played Martha Longhurst since early in "Coronation Street"s run. Carol's firing caused controversy, prompting her co-star Violet Carson (Ena Sharples) to threaten to quit, although she ultimately remained. The sacking was widely covered in the media, and Carol was mobbed by fans while out in public. Some, including "Coronation Street" writer H.V. Kershaw, criticised the decision as a bid to boost ratings. Steve Tanner and Elsie Howard's 1967 wedding had more than 20 million viewers. By 1968, critics contended that the programme offered a nostalgic and outdated depiction of the urban working class, failing to reflect the contemporary realities of British society amid the huge economic and social changes that came during the 1960s decade. Granada considered modernising the show with issue-driven plots, including Lucille Hewitt (Jennifer Moss) becoming addicted to drugs, Jerry Booth (Graham Haberfield) being in a storyline about homosexuality, Emily Nugent (Eileen Derbyshire) having an out-of-wedlock child, and introducing a black family. However, these ideas were abandoned due to concerns about viewer reactions. The first episode filmed in colour was broadcast on 3 November 1969. Since then, all episodes have been produced in colour, with the exception of those created during the Colour Strike. 1970s. Several main cast members departed "Coronation Street" in the early 1970s. In 1970, Arthur Leslie, who played Jack Walker, the landlord of the Rovers Return Inn, died suddenly, and his character was written out shortly thereafter. Anne Reid left the series in 1971, with her character, Valerie Barlow, dying due to accidental electrocution from a faulty hairdryer. In 1973, Pat Phoenix, who played Elsie Tanner, departed, and Doris Speed (Annie Walker) took a two-month leave of absence. During this period, ITV's other flagship soap opera, "Crossroads", experienced an increase in viewership, while "Coronation Street" saw a decline in ratings. The departure of these cast members in the early 1970s prompted the writing team to expand the roles of supporting characters and introduce new ones. Deirdre Hunt (Anne Kirkbride) was introduced in 1972 and became a regular character in 1973. Bet Lynch (Julie Goodyear), who had become a regular character in 1970, became increasingly prominent as the decade progressed. Rita Littlewood (Barbara Knox), who had made a single appearance in 1964, returned and joined the regular cast in 1972. Mavis Riley (Thelma Barlow) became a regular character in 1973. Ivy Tyldesley (Lynne Perrie, later renamed "Tilsley") was introduced as a recurring character in 1971. Longtime characters Gail Potter (Helen Worth), Blanche Hunt (initially played by Patricia Cutts and later by Maggie Jones), and Vera Duckworth (Liz Dawn) were introduced in 1974. Comic storylines, a hallmark of the series in the 1960s, had become less frequent in the early 1970s. They were revived under new producer Bill Podmore, who joined the programme in 1976 after previously working on Granada's comedy productions. In September 1977, the "News of the World" quoted actor Stephen Hancock (Ernest Bishop) as saying "The Street kills an actor. I'm just doing a job, not acting. The scriptwriters have turned me into Ernie Bishop. I've tried to resist it but it is very hard not to play the part all the time, even at home." 
Hancock also expressed frustration with the payment system, which guaranteed some long-serving actors—including Pat Phoenix, Doris Speed, and Peter Adamson—payment for every episode regardless of their appearances, while others were compensated only for episodes in which they appeared. Hancock's complaints led to a dispute with Podmore, dubbed "The Godfather" by the media, who refused to alter the system. Hancock ultimately resigned. To write out Ernest Bishop while preserving the role of his wife, Emily (Eileen Derbyshire), the writers decided his character would be fatally shot during a payroll robbery at Mike Baldwin's (Johnny Briggs) factory. The episode, which aired on 11 January 1978, marked the first instance of such explicit violence on "Coronation Street", leading to a significant viewer backlash. Granada's switchboard was overwhelmed with complaints, and the Lobby Against TV Violence criticised the decision to air the storyline. Granada defended the plot, emphasising its focus on the grief and loss experienced by Emily. Despite its enduring popularity, critics argued that "Coronation Street" had grown complacent during this period, with the show relying on nostalgic depictions of working-class life rather than addressing contemporary social issues. 1980s. Between 1980 and 1984, "Coronation Street" faced the loss of many original cast members. Violet Carson (Ena Sharples) retired in 1980 and Doris Speed (Annie Walker) retired in 1983, Pat Phoenix (Elsie Tanner) left the programme permanently in 1984. Jack Howarth died in 1984 and his character, Albert Tatlock, was written out off-screen. By May 1984, William Roache (Ken Barlow) was the sole remaining actor from the programme's original cast. Characters like Phyllis Pearce (Jill Summers), Vera and Jack Duckworth (Liz Dawn and Bill Tarmey), and Percy Sugden (Bill Waddington) took on roles reminiscent of earlier characters. The show introduced its first major black character, Shirley Armitage (Lisa Lewis), as a machinist at Baldwin's Casuals in 1983. Established characters were assigned new roles, and new characters were introduced to fill the gaps left by those who departed. Phyllis Pearce (Jill Summers) was hailed as the new Ena Sharples in 1982, the Duckworths moved into No.9 in 1983 and slipped into the role once held by the Ogdens, while Percy Sugden (Bill Waddington) appeared in 1983 and took over the grumpy war veteran role from Albert Tatlock. The question of who would take over the Rovers Return after Annie Walker's 1983 exit was answered in 1985 when Bet Lynch (who also mirrored the vulnerability and strength of Elsie Tanner) was installed as landlady. In 1983, Shirley Armitage (Lisa Lewis) became the first major Black character in the programme. In 1983, Peter Adamson, who had played Len Fairclough since 1961, was dismissed for breaching his contract. Granada had previously warned Adamson for publishing unauthorised newspaper articles that criticised the show and its cast. Producer Bill Podmore terminated Adamson's contract after discovering he had sold his memoirs despite the prior warning. The sacking coincided with allegations of Adamson having indecently assaulted two eight-year-old girls in a swimming pool. Granada Television gave Adamson financial support through his legal problems, with a Crown Court jury finding him not guilty in July 1983. Adamson's dispute over his memoirs and newspaper articles was not known to the public and the media reported that Adamson had been dismissed because of the allegations. 
Len Fairclough was killed off-screen in a motorway crash while returning home from an affair in December 1983. Adamson celebrated the character's death by delivering an obituary on TV-am dressed as an undertaker. New soap operas began airing on British television in the 1980s, with Channel 4 launching "Brookside" in 1982 and the BBC debuting "EastEnders" in 1985. Both soaps presented a grittier, more contemporary view of British life, contrasting with "Coronation Street"s nostalgic tone. "EastEnders" regularly obtained higher viewing figures than "Coronation Street" due to its omnibus episodes shown at weekends. Despite this, "Coronation Street" maintained strong ratings. Between 1988 and 1989, many aspects of the show were modernised by new producer David Liddiment. A new exterior set had been built in 1982, and in 1989 it was redeveloped to include new houses and shops. Production techniques were also changed with a new studio being built, and the inclusion of more location filming, which had moved exterior scenes from being shot on film to videotape in 1988. Due to new pressures, a third weekly episode was introduced on 20 October 1989, broadcast each Friday at 7:30 pm. In 1988, Christopher Quinten, who had played Brian Tilsley since 1978, informed Granada of his intention to move to the United States to marry Leeza Gibbons and pursue an acting career in Los Angeles. Quinten sought assurances that his role would remain open for a potential return. However, producers decided that Tilsley would be killed off. Quinten was in Los Angeles when the decision was made and threatened to quit abruptly. Co-star Helen Worth convinced him to film his final scenes. Brian Tilsley's death, aired on 15 February 1989, depicted him being fatally stabbed while defending a young woman outside a nightclub. The storyline attracted viewer complaints, with Mary Whitehouse condemning the portrayal of violence. One of "Coronation Street's" most prominent storylines in the 1980s was the engagement and marriage of Ken Barlow and Deirdre Langton (Anne Kirkbride). In July 1981, their wedding was watched by over 15 million viewers – more viewers than ITV's coverage of the wedding of Prince Charles and Lady Diana Spencer two days later. Deirdre Barlow's affair with Mike Baldwin (Johnny Briggs) in 1983 garnered significant media attention and sparked an ongoing feud between Ken Barlow and Mike Baldwin. Other notable marriages included Alf Roberts (Bryan Mosley) to Audrey Potter (Sue Nicholls) in 1985, Mike Baldwin to Ken Barlow's daughter Susan (Wendy Jane Walker) in 1986, Kevin Webster (Michael Le Vell) to Sally Seddon (Sally Whittaker) in 1986, Bet Lynch to Alec Gilroy (Roy Barraclough) in 1987, and Ivy Tilsley to Don Brennan (Geoffrey Hinsliff) in 1988. The long-awaited marriage of Mavis Riley and Derek Wilton (Peter Baldwin) occurred in 1988 after over a decade of on-and-off romance and a failed marriage attempt in 1984. Jean Alexander, who played Hilda Ogden on the programme starting in 1964, left "Coronation Street" in 1987. Her final episode aired on Christmas Day 1987 with a combined audience (original and omnibus) of 26.7 million.
Between 1986 and 1989, the show ran a storyline in which Rita Fairclough (Barbara Knox) suffered domestic abuse at the hands of her partner Alan Bradley (Mark Eden), culminating in his death after being struck by a Blackpool tram in December 1989. This plotline brought the show its highest-ever combined viewing figure, with nearly 27 million viewers watching a March 1989 episode where Bradley is on the run from the police after attempting to kill Rita. This record is sometimes mistakenly attributed to the tram death episode aired on 8 December 1989. 1990s. In 1992, William Rees-Mogg, Chairman of the Broadcasting Standards Council, criticised "Coronation Street" for its low representation of ethnic minorities and its nostalgic portrayal of a bygone era. This was seen as unreflective of Greater Manchester, where many neighbourhoods had significant Black and Asian populations. Headlines such as "Coronation Street shuts out blacks" ("The Times") and "'Put colour in t'Street" ("Daily Mirror") reflected the controversy. Patrick Stoddart of "The Times" defended the show, stating: "the millions who watch "Coronation Street" – and who will continue to do so despite Lord Rees-Mogg – know real life when they see it ... in the most confident and accomplished soap opera television has ever seen". While Black and Asian characters had appeared sporadically, the first regular non-white family, the Desai family, was introduced in 1999. In 1990, new characters Des Barnes (Philip Middlemiss) and Steph Barnes (Amelia Bullmore) moved to Coronation Street and were labelled yuppies by the media. Raquel Wolstenhulme (Sarah Lancashire) debuted in 1991 and became one of the era's most popular characters, departing in 1996 with a brief return in 2000. The McDonald family–Liz (Beverley Callard), Jim (Charles Lawson), Steve (Simon Gregson), and Andy (Nicholas Cochrane)–were introduced in 1989 and became major characters in the 1990s. Other notable arrivals included Maud Grimes (Elizabeth Bradley), a wheelchair user and pensioner, in 1993; Roy Cropper (David Neilson), a café owner, in 1995; young married couple Gary and Judy Mallett (Ian Mercer and Gaynor Faye) in 1995; and butcher Fred Elliott (John Savident) in 1994 and his son Ashley Peacock (Steven Arnold) in 1995. The 1990s also saw an increase in slapstick and physical humour, exemplified by comedic characters including Reg Holdsworth (Ken Morley), a supermarket manager. In 1997, Brian Park became producer with a vision to modernise the show and focus on younger characters. On his first day, he axed several long-standing characters, including Derek Wilton (Peter Baldwin), Don Brennan (Geoffrey Hinsliff), Percy Sugden (Bill Waddington), Bill Webster (Peter Armitage), Billy Williams (Frank Mills) and Maureen Holdsworth (Sherrie Hewson). The decision prompted Thelma Barlow, who played Mavis Wilton, to resign in protest at her co-star's dismissal. Several longtime writers, including Barry Hill, Adele Rose, and Julian Roach, resigned during this period. Park introduced younger characters between 1997 and 1998, such as a recast Nick Tilsley (Adam Rickitt), single mother Zoe Tattersall (Joanne Froggatt), and the problematic Battersby family. The show also began addressing more contemporary issues, including drug dealing, eco-activism, and religious cults. Hayley Patterson (Julie Hesmondhalgh), introduced during this era, became the first transgender character in a British soap opera and soon married Roy Cropper.
Park, who resigned in 1998, cited this storyline as one of his most significant achievements. The changes divided audiences, with some alienated by the modernised approach. Critics accused "Coronation Street" of losing its traditional charm while trying to emulate edgier rivals like "Brookside" and "EastEnders". Victor Lewis-Smith wrote in the "Daily Mirror": "Apparently it doesn't matter that this is a first-class soap opera, superbly scripted and flawlessly performed by a seasoned repertory company." One of the decade's most famous storylines occurred in 1998, when Deirdre Rachid (Anne Kirkbride) was wrongfully imprisoned after being deceived by con-man Jon Lindsay (Owen Aaronovitch). The episode depicting her sentencing attracted 19 million viewers and inspired the "Free the Weatherfield One" campaign, which generated significant media attention. Then-Prime Minister Tony Blair commented on the fictional case in Parliament. Deirdre was released after three weeks, with Granada confirming that her release had always been planned despite the media frenzy. 2000s. On 8 December 2000, "Coronation Street" celebrated its 40th anniversary with a live, hour-long episode. King Charles III (then Prince of Wales) appeared as himself. Earlier that year, 13-year-old Sarah-Louise Platt (Tina O'Brien) became pregnant, giving birth to a daughter, Bethany, on 4 June. The February episode where Gail was told of her daughter's pregnancy was watched by 15 million viewers. The programme continued to tackle issue-led storylines, including Toyah Battersby (Georgia Taylor) being raped, Roy and Hayley Cropper (David Neilson and Julie Hesmondhalgh) abducting their foster child, Sarah Platt's Internet chat room abduction, and Alma Halliwell's (Amanda Barrie) 2001 death from cervical cancer. These storylines proved unpopular with viewers and led to a decline in ratings. As a result, in October 2001, producer Jane Macnaught was reassigned, and Carolyn Reynolds took over. In 2002, Kieran Roberts became producer, aiming to reintroduce "gentle storylines and humour," steering the show away from competing with other soaps. In July 2002, Gail Platt married Richard Hillman (Brian Capron), a financial advisor who had left Duggie Ferguson (John Bowe) to die after a fall during an argument, murdered his ex-wife Patricia (Annabelle Apsion), and later killed their neighbour Maxine Peacock (Tracy Shaw). He also attempted to kill his mother-in-law, Audrey Roberts (Sue Nicholls), and longtime family friend, Emily Bishop (Eileen Derbyshire), all for financial gain as his debts mounted. Hillman confessed his crimes to Gail in a two-hander episode in February 2003 before returning weeks later with the intention of killing Gail, her children Sarah and David (Jack P. Shepherd), and granddaughter Bethany by driving them into a canal. While the Platt family survived, Hillman drowned. This storyline received widespread media attention, with viewing figures peaking at 19.4 million. Todd Grimshaw (Bruno Langley) became "Corrie's" first regular homosexual character. In 2003, another gay male character was introduced, Sean Tully (Antony Cotton). Other notable storylines of the decade included the bigamy of Peter Barlow (Chris Gascoyne) and his addiction to alcohol and, later in the decade, Maya Sharma's (Sasha Behar) revenge on former lover Dev Alahan (Jimmi Harkishin), Charlie Stubbs's (Bill Ward) psychological abuse of Shelley Unwin (Sally Lindsay), and the deaths of Mike Baldwin (Johnny Briggs), Vera Duckworth (Liz Dawn) and Fred Elliott (John Savident).
In 2007, Tracy Barlow (Kate Ford) murdered Charlie Stubbs and claimed it was self-defence; the audience during this storyline peaked at 13.3 million. At the 2007 British Soap Awards, it won Best Storyline, and Ford was voted Best Actress for her portrayal. In July 2007, after 34 years in the role of Vera Duckworth, Liz Dawn left the show due to ill health. After conversations between Dawn and producers Kieran Roberts and Steve Frost, the decision was made to kill Vera off. Tina O'Brien revealed in the British press on 4 April 2007 that she would be leaving "Coronation Street" later in the year. Sarah-Louise, who was involved in some of the decade's most controversial stories, left in December 2007 with her daughter, Bethany. In 2008, Michelle Connor learned that Ryan (Ben Thompson) was not her biological son, as he had been accidentally swapped at birth with Alex Neeson (Dario Coates). Carla Connor (Alison King) turned to Liam for comfort and developed feelings for him. In spite of knowing about her feelings, Liam married Maria Sutherland (Samia Longchambon). Maria and Liam's baby son was stillborn in April, and during an estrangement from Maria upon the death of their baby, Liam had a one-night stand with Carla, a story which helped pave the way for his departure. In August 2008, Jed Stone (Kenneth Cope) returned after 42 years. Liam Connor and his ex-sister-in-law Carla gave in to their feelings for each other and began an affair. Carla's fiancé Tony Gordon (Gray O'Brien) discovered the affair and had Liam killed in a hit-and-run in October. Carla struggled to come to terms with Liam's death, but decided she still loved Tony and married him on 3 December, in an episode attracting 10.3 million viewers. In April 2009 it was revealed that Eileen Grimshaw's (Sue Cleaver) father, Colin (Edward de Souza) – the son of Elsie Tanner's (Pat Phoenix) cousin Arnley – had slept with Eileen's old classmate, Paula Carp (Sharon Duce) while she was still at school, and that Paula's daughter Julie (Katy Cavanagh) was in fact also Colin's daughter. Other stories in 2009 included Maria giving birth to Liam's son and her subsequent relationship with Liam's killer Tony, Steve McDonald's (Simon Gregson) marriage to Becky Granger (Katherine Kelly) and Kevin Webster's (Michael Le Vell) affair with Molly Dobbs (Vicky Binns). On Christmas Day 2009, Sally Webster (Sally Dynevor) told husband Kevin that she had breast cancer, just as he was about to leave her for lover Molly. 2010s. The show began broadcasting in high-definition in May 2010, and on 17 September that year, "Coronation Street" entered "Guinness World Records" as the world's longest-running television soap opera after the American soap opera "As the World Turns" concluded. William Roache was listed as the world's longest-running soap actor. "Coronation Street"s 50th anniversary week was celebrated with seven episodes, plus a special one-hour live episode, broadcast from 6–10 December. The episodes averaged 14 million viewers, a 52.1% share of the audience. The anniversary was also publicised with ITV specials and news broadcasts. In the storyline, Nick Tilsley and Leanne Battersby's bar — The Joinery — exploded during Peter Barlow's stag party. As a result, the viaduct was destroyed, sending a Metrolink tram careering onto the street, destroying D&S Alahan's Corner Shop and The Kabin. Two characters, Ashley Peacock (Steven Arnold) and Molly Dobbs (Vicky Binns), along with an unknown taxi driver, were killed as a result of the disaster.
Rita Sullivan (Barbara Knox) survived, despite being trapped under the rubble of her destroyed shop. Fiz Stape (Jennie McAlpine) prematurely gave birth to a baby girl, Hope. The episode of "EastEnders" broadcast on the same day as "Coronation Street"s 50th anniversary episode included a tribute, with the character Dot Branning (June Brown, who briefly appeared in the show during the 1970s) saying that she never misses an episode of "Coronation Street". 2020s. On 7 February 2020, with its 60th anniversary ten months away, "Coronation Street" aired its landmark 10,000th episode, the runtime of which was extended to 60 minutes. Producers stated that the episode would contain "a nostalgic trip down memory lane" and "a nod to its own past". A month later, ITV announced that production on the soap would have to be suspended, as the United Kingdom was put into a national lockdown due to the COVID-19 pandemic (see impact of the COVID-19 pandemic on television). After an 11-week intermission for all cast and crew members, filming resumed in June 2020. The episodes featured social distancing to adhere to the guidelines set by the British government, and it was confirmed that all actors over 70, as well as those with underlying health conditions, would not be allowed to be on set until it was safe to do so. This included "Coronation Street" veterans William Roache (Ken Barlow) at 88, Barbara Knox (Rita Tanner) at 87, Malcolm Hebden (Norris Cole) at 80 and Sue Nicholls (Audrey Roberts) at 76. It was deemed safe for Maureen Lipman (Evelyn Plummer) and David Neilson (Roy Cropper) to continue. By December, all cast members had returned to set, and on Wednesday 9 December 2020, the soap celebrated its 60th anniversary, with original plans for the episode forced to change due to COVID-19 guidelines. The anniversary week saw the conclusion of a long-running coercive control storyline that began in May 2019, with Geoff Metcalfe (Ian Bartholomew) abusing Yasmeen Nazir (Shelley King). For the showdown, which resulted in the death of Geoff, social distancing rules were relaxed on the condition that the crew members involved formed a social bubble prior to the filming. In late 2021, series producer Iain MacLeod announced that the original plans for the 60th anniversary would now take place in a special week of episodes in October 2021. On 12 October 2021, it was announced that "Coronation Street" would take part in a special crossover event involving seven British soaps to promote the topic of climate change ahead of the 2021 United Nations Climate Change Conference. During the week, beginning from 1 November, social media clips featuring Liam Cavanagh (Jonny McPherson) and Amelia Spencer (Daisy Campbell) from "Emmerdale", as well as Daniel Granger (Matthew Chambers) from "Doctors" were featured on the programme, while events from "Holby City" were also referenced. A similar clip featuring Maria Connor (Samia Longchambon) was also featured on "EastEnders". In June 2024, ITV announced that "Coronation Street"s third longest-serving cast member, Helen Worth, had decided to leave the soap after fifty years of portraying Gail Platt. The character made her departure in December 2024. Following this, several other cast exits began to be confirmed, with a mixture of producers axing the characters and cast members deciding to quit.
In what the "Metro" described as a "cast exodus", these have included Sue Cleaver leaving her long-term role as Eileen Grimshaw and Charlotte Jordan leaving her role as Daisy Midgeley, as well as Debbie Webster (Sue Devaney) and Craig Tinker (Colson Smith) being written out of the series. Characters. Since 1960, "Coronation Street" has featured many characters whose popularity with viewers and critics has differed greatly. The original cast was created by Tony Warren, with the characters of Ena Sharples (Violet Carson), Elsie Tanner (Pat Phoenix) and Annie Walker (Doris Speed) as central figures. These three women remained with the show for at least 20 years, and became archetypes of British soap opera, often being emulated by other serials. Ena was the street's busybody, battle-axe and self-proclaimed moral voice. Elsie was the tart with a heart, who was constantly hurt by men in the search for true love. Annie Walker, landlady of the Rovers Return Inn, had delusions of grandeur and saw herself as better than the other residents. "Coronation Street" became known for the portrayal of strong female characters, including original cast characters like Ena, Annie and Elsie, and later Hilda Ogden (Jean Alexander), who first appeared in 1964; all four became household names during the 1960s. Warren's programme was largely matriarchal, which some commentators put down to the female-dominant environment in which he grew up. Consequently, the show has a long tradition of downtrodden husbands, most famously Stan Ogden (Bernard Youens) and Jack Duckworth (Bill Tarmey), husbands of Hilda and Vera Duckworth (Liz Dawn), respectively. Coronation Street's longest-serving character, Ken Barlow (William Roache) entered the storyline as a young radical, reflecting the youth of 1960s Britain, where figures like the Beatles, the Rolling Stones and the model Twiggy were to reshape the concept of youthful rebellion. Though the rest of the original Barlow family were killed off before the end of the 1970s, Ken, who for 27 years was the only character from the first episode remaining, has remained the constant link throughout the entire series. In 2011, Dennis Tanner (Philip Lowrie), another character from the first episode, returned to "Coronation Street" after a 43-year absence. Since 1984, Ken Barlow has been the show's only remaining original character. Emily Bishop (Eileen Derbyshire) had appeared in the series since January 1961, when the show was just weeks old, and was the show's longest-serving female character before she departed in January 2016 after 55 years. Rita Tanner (Barbara Knox) appeared on the show for one episode in December 1964, before returning as a full-time cast member in January 1972. She is currently the second longest-serving original cast member on the show. Roache and Knox are also the two oldest-working cast members on the soap at 92 and 91 years-old respectively. Stan and Hilda Ogden were introduced in 1964, with Hilda becoming one of the most famous British soap opera characters of all time. In a 1982 poll, she was voted fourth-most recognisable woman in Britain, after Queen Elizabeth The Queen Mother, Queen Elizabeth II and Diana, Princess of Wales. Hilda's best-known attributes were her pinny, hair curlers, and the "muriel" in her living room with three "flying" duck ornaments. Hilda Ogden's departure on Christmas Day 1987, remains the highest-rated episode of "Coronation Street" ever, with nearly 27,000,000 viewers. 
Stan Ogden had been killed off in 1984 following the death of actor Bernard Youens after a long illness which had restricted his appearances towards the end. Bet Lynch (Julie Goodyear) first appeared in 1966, before becoming a regular in 1970, and went on to become one of the most famous "Corrie" characters. Bet stood as the central character of the show from 1985 until departing in 1995, often being dubbed as "Queen of the Street" by the media, and indeed herself. The character briefly returned in June 2002 and November 2003. "Coronation Street" and its characters often rely heavily on archetypes, with the characterisation of some of its current and recent cast based loosely on former characters. Phyllis Pearce (Jill Summers), Blanche Hunt (Maggie Jones) and Sylvia Goodwin (Stephanie Cole) embodied the role of the acid-tongued busybody originally held by Ena, Sally Webster (Sally Dynevor) has grown snobbish, like Annie, and a number of the programme's female characters, such as Carla Connor (Alison King), mirror the vulnerability of Elsie and Bet. Other recurring archetypes include the war veteran such as Albert Tatlock (Jack Howarth), Percy Sugden (Bill Waddington) and Gary Windass (Mikey North), the bumbling retail manager like Leonard Swindley (Arthur Lowe), Reg Holdsworth (Ken Morley) and Norris Cole (Malcolm Hebden), quick-tempered, tough tradesmen like Len Fairclough (Peter Adamson), Jim McDonald (Charles Lawson), Tommy Harris (Thomas Craig) and Owen Armstrong (Ian Puleston-Davies), and the perennial losers such as Stan and Hilda, Jack and Vera, Les Battersby (Bruce Jones), Beth Tinker (Lisa George) and Kirk Sutherland (Andrew Whyment). Villains are also common character types, such as Tracy Barlow (Kate Ford), Alan Bradley (Mark Eden), Jenny Bradley (Sally Ann Matthews), Rob Donovan (Marc Baylis), Frank Foster (Andrew Lancel), Tony Gordon (Gray O'Brien), Caz Hammond (Rhea Bailey), Richard Hillman (Brian Capron), Greg Kelly (Stephen Billington), Will Chatterton (Leon Ockenden), Nathan Curtis (Christopher Harper), Callum Logan (Sean Ward), Karl Munro (John Michie), Pat Phelan (Connor McIntyre), David Platt (Jack P. Shepherd), Maya Sharma (Sasha Behar), Kirsty Soames (Natalie Gumede), John Stape (Graeme Hawley), Geoff Metcalfe (Ian Bartholomew) and Gary Windass (Mikey North). The show's former archivist and scriptwriter Daran Little disagreed with the characterisation of the show as a collection of stereotypes. "Rather, remember that Elsie, Ena and others were the first of their kind ever seen on British television. If later characters are stereotypes, it's because they are from the same original mould. It is the hundreds of programmes that have followed which have copied "Coronation Street"." In 2024, it was reported that the number of actors appearing in each storyline had been cut in order to reduce costs due to declining viewing figures. Storylines. Many topical issues have been tackled on Coronation Street, such as rape, including male and marital, historic sexual abuse, underage pregnancy, transgender issues, the right to die, racism, coercive control, cancer, homosexuality, domestic abuse, child grooming, and suicide, among others. 
Key storylines have included: Mike and Deirdre's affair (1983), the death of Brian Tilsley (1989), Alan Bradley's abuse of Rita (1989), Kevin and Natalie's affair (1997), Deirdre's wrongful imprisonment for fraud (1998), Sarah Platt's underage pregnancy (2000), Toyah's rape (2001), Alma's cancer (2001), Richard Hillman's serial killer storyline (2002–2003), Peter Barlow's bigamy (2003), Kevin and Molly's affair (2009), Kirsty's abuse of Tyrone (2012), Hayley's cancer (2013), Faye's underage pregnancy (2015), Bethany's grooming (2017), David's rape (2018), Aidan's suicide (2018), Sinead's diagnosis with cervical cancer (2019), Yasmeen's abuse (2020), Daisy's stalking ordeal (2023), Paul's MND (2023), Liam's bullying and suicidal thoughts (2023), Lauren's disappearance and possible murder (2024), Mason's death from knife crime (2025) and Debbie's dementia (2025). Production. Broadcast format. Between 9 December 1960 and 3 March 1961, "Coronation Street" was broadcast twice weekly, on Wednesday and Friday. During this period, the Friday episode was broadcast live, with the Wednesday episode being pre-recorded 15 minutes later. When the programme went fully networked on 6 March 1961, broadcast days changed to Monday and Wednesday. The last regular episode to be shown live was broadcast on 3 February 1961. The series was transmitted in black and white for the majority of the 1960s. Preparations were made to film episode 923, to be transmitted Wednesday 29 October 1969, in colour. This instalment featured the street's residents on a coach trip to the Lake District. In the end, suitable colour film stock for the cameras could not be found and the footage was shot in black and white. The following episode, transmitted Monday 3 November, was videotaped in colour but featured black and white film inserts and title sequence. Like BBC1, the ITV network was officially broadcast in black and white at this point (though programmes were actually broadcast in colour as early as July that year for colour transmission testing and adjustment), so the episode was seen by most in black and white. The ITV network, like BBC1, began full colour transmissions on 15 November 1969. Daran Little, for many years the official programme archivist, claims that the first episode to be transmitted in colour was episode 930 shown on 24 November 1969. In October 1970, a technicians' dispute turned into the Colour Strike when sound staff were denied a pay rise given to camera staff the year before for working with colour recording equipment. The terms of the work-to-rule were that staff refused to work with the new equipment (though the old black and white equipment had been disposed of by then) and therefore programmes were recorded and transmitted in black and white, including "Coronation Street". The dispute was resolved in early 1971 and the last black and white episode was broadcast on 10 February 1971, although the episodes transmitted on 22 and 24 February 1971 had contained black and white location inserts. From 22 March 2010, "Coronation Street" was produced in 1080/50i for transmission on HDTV platforms on ITV HD. The first transmission in this format was episode 7351 on 31 May 2010 with a new set of titles and re-recorded theme tune. On 26 May 2010 ITV previewed the new HD titles on the "Coronation Street" website. For copyright reasons, only viewers residing in the UK could see them on the ITV site.
On 24 January 2022, ITV announced that, as part of an overhaul of its evening programming, "Coronation Street" would permanently air as three 60-minute episodes per week from March 2022 onwards. This is set to change again in 2026, when half an hour of content will be dropped from the weekly schedule and the soap will instead air every weekday at 8:30 pm for 30 minutes. ITV's Managing Director of Media and Entertainment Kevin Lygo explained: "research insights show us that soap viewers are increasingly looking to the soaps for their pacey storytelling. Streaming-friendly, 30 minute episodes better provide the opportunity to meet viewer expectations for storyline pace, pay-off and resolution." Production staff. "Coronation Street's" creator, Tony Warren, wrote the first 13 episodes of the programme in 1960, and continued to write for the programme intermittently until 1976. He later became a novelist, but retained links with "Coronation Street." Warren died in 2016. Harry Kershaw was the script editor for "Coronation Street" when the programme began in 1960, working alongside Tony Warren. Kershaw was also a script writer for the programme and the show's producer between 1962 and 1971. He and John Finch remain the only people to have held all three posts of script editor, writer and producer. Adele Rose was "Coronation Street"s first female writer and the show's longest-serving writer, completing 455 scripts between 1961 and 1998. She also created "Byker Grove" and won a BAFTA award in 1993 for her work on the show. Bill Podmore was the show's longest-serving producer. By the time he stepped down in 1988 he had completed 13 years at the production helm. Nicknamed the "godfather" by the tabloid press, he was renowned for his tough, uncompromising style and was feared by both crew and cast alike. He is known for sacking Peter Adamson, the show's Len Fairclough, in 1983. Iain MacLeod is the current series producer. Michael Apted, known for the "Up!" series of documentaries, was a director on the programme in the early 1960s. This period of his career marked the first of his many collaborations with writer Jack Rosenthal. Rosenthal, noted for such television plays as "Bar Mitzvah Boy", began his career on the show, writing over 150 episodes between 1961 and 1969. Paul Abbott was a story editor on the programme in the 1980s and began writing episodes in 1989, but left in 1993 to produce "Cracker", for which he later wrote, before creating his own dramas such as "Touching Evil" and "Shameless". Russell T Davies was briefly a storyliner on the programme in the mid-1990s, also writing the script for the direct-to-video special "Viva Las Vegas!". He, too, has become a noted writer of his own high-profile television drama programmes, including "Queer as Folk" and the 2005 revival of "Doctor Who". Jimmy McGovern also wrote some episodes. Theme music. The show's theme music, a cornet piece, accompanied by a brass band plus clarinet and double bass, reminiscent of northern band music, was written by Eric Spear. The original theme tune was called "Lancashire Blues" and Spear was paid a £6 commission in 1960 to write it. The identity of the trumpeter was not public knowledge until 1994, when jazz musician and journalist Ron Simmonds revealed that it was the Surrey musician Ronnie Hunt. He added, "an attempt was made in later years to re-record that solo, using Stan Roderick, but it sounded too good, and they reverted to the old one."
In 2004, the "Manchester Evening News" published a contradictory story that a young musician from Wilmslow called David Browning had played the original version. However, after investigating further, his story was found to be false, Browning not knowing that the original trumpet player Ronnie Hunt was still alive, proving that he was the true and rightful player that performed the solo. With his union pay stubs and contract, Browning was proven false. A new, completely re-recorded version of the theme tune replaced the original when the series started broadcasting in HD on 31 May 2010. It accompanied a new montage-style credits sequence featuring images of Manchester and Weatherfield. A reggae version of the theme tune was recorded by The I-Royals and released by Media Marvels and WEA in 1983. Viewing figures. Episodes in the 1960s, 1970s and 1980s, regularly attracted figures of between 18 and 21 million viewers, and during the 1990s and early 2000s, 14 to 16 million per episode would be typical. Like most terrestrial television in the UK, a decline in viewership has taken place and the show posts an average audience of just under 9 million per episode , remaining one of the highest rated programmes in the UK. "EastEnders" and "Coronation Street" have often competed for the highest rated show. The episode that aired on 2 January 1985, in which Bet Lynch (Julie Goodyear) finds out she has got the job as manager of the Rovers Return, is the highest-rated single episode in the show's history, attracting 21.40 million viewers. The 25 December 1987 episode, where Hilda Ogden (Jean Alexander) leaves the street to start a new life as a housekeeper for long-term employer Dr Lowther, attracted a combined audience of 26.65 million for its original airing and omnibus repeat on 27 December 1987. This is the second-highest combined rating in the show's history. The show attracted its highest-ever combined rating of 26.93 million for the episode that aired on 15 (and 19) March 1989, where Rita Fairclough (Barbara Knox) is in hospital and Alan Bradley (Mark Eden) is hiding from the police after trying to kill Rita in the previous episode. By the 2020s viewing figures dropped due to increased competition from streaming services and satellite channels, with the usually high-rated Christmas episode being viewed by only 2.6 million households in 2023, down from 2.8 million in 2022 and 8 million a decade previously. However, these figures are based on overnight ratings and do not include viewing via ITV's "catch-up" streaming service. Sets. The regular exterior buildings shown in Coronation Street include a row of terrace houses, several townhouses, and communal areas including a newsagents (The Kabin), a café (Roy's Rolls), a general grocery shop (D&S Alahan's), a factory (Underworld) and Rovers Return Inn public house. The Rovers Return Inn is the main meeting place for the show's characters. Between 1960 and 1968, street scenes were transmitted/taped before a set constructed in a studio, with the house fronts reduced in scale to 3/4 and constructed from wood. In 1968 Granada built an outside set not all that different from the interior version previously used, with the wooden façades from the studio simply being erected on the new site. When the show began broadcasting in colour, these were replaced with brick façades, and back yards were added in the 1970s. In 1982, a permanent full-street set was built in the Granada backlot, an area between Quay Street and Liverpool Road in Manchester. 
The set was constructed from reclaimed Salford brick. The set was updated in 1989 with the construction of a new factory, two shop units and three modern town houses on the south side of the street. Between 1989 and 1999, the Granada Studios Tour allowed members of the public to visit the set. The exterior set was extended and updated in 1999. This update added to the Rosamund Street and Victoria Street façades, and added a viaduct on Rosamund Street. Most interior scenes are shot in the adjoining purpose-built studio. In 2008, Victoria Court, an apartment building full of luxury flats, was started on Victoria Street. In 2014, production moved to a new site at Trafford Wharf, a former dock area about two miles to the west, part of the MediaCityUK complex. The Trafford Wharf backlot is built upon a former truck stop site next to the Imperial War Museum North. It took two years from start to finish to recreate the iconic Street. The houses were built to almost full scale after previously being three-quarter size. On 5 April 2014, booked public visits to the old Quay Street set began. A television advert with a voiceover from Victoria Wood promoted the tour. The tour was discontinued in December 2015. On 12 March 2018, the extension of the Victoria Street set was officially unveiled. The new set included a garden featuring a memorial bench paying tribute to the 22 victims of the Manchester Arena bombing, including "Coronation Street" "super fan" Martyn Hett. The precinct includes a Greater Manchester Police station called Weatherfield Police station. As part of a product placement deal between three companies and ITV Studios, new additions included a tram stop named Weatherfield North, carrying Transport for Greater Manchester Metrolink branding, along with shop front façades for Costa Coffee and a Weatherfield-branded Co-op Food store, whose interior has also been shown on screen. Exterior scenes at the new set first aired on 20 April 2018. On 20 April 2018, ITV announced that it had been granted planning permission to allow booked public visits to the MediaCityUK Trafford Wharf set. Tours commenced on weekends from 26 May 2018 onwards. The set was further expanded in March 2022, with the addition of the Weatherfield Precinct, which took six months to build, and was inspired by Salford. The new section of the set included a two-storey construction featuring maisonettes, a staircase and balcony leading to the properties, a piazza and an array of shops and units. Broadcast. United Kingdom. For 60 years, "Coronation Street" has remained at the centre of ITV's prime time schedule. The programme is currently shown in the UK in three hour-long episodes, over three evenings a week on ITV in the 8 pm time slot: Mondays, Wednesdays and Fridays. Additional episodes have been broadcast at other times, such as between 22 and 26 November 2004, when eight episodes were shown including three 10pm outings. These late night episodes allowed for more graphic content when 'Mad' Maya Sharma (Sasha Behar) sought her revenge on Dev Alahan and Sunita Alahan. From Friday 9 December 1960 until Friday 3 March 1961, the programme was shown in two episodes broadcast on Wednesday and Friday at 7 pm. Schedules were changed, and from Monday 6 March 1961 until Wednesday 11 October 1989, the programme was shown in two episodes broadcast Monday and Wednesday at 7:30 pm. A third weekly episode was introduced on Friday 20 October 1989, broadcast at 7:30 pm.
From 1996, an extra episode was broadcast at 7:30 pm on Sunday nights. Aside from Granada, the programme originally appeared on the following stations of the ITV network: Anglia Television, Associated-Rediffusion, Television Wales and the West, Scottish Television, Southern Television and Ulster Television. From episode 14 on Wednesday 25 January 1961, Tyne Tees Television broadcast the programme. That left ATV in the Midlands as the only ITV station not carrying the show. When they decided to broadcast the programme, national transmission was changed from Wednesday and Friday at 7 pm to Monday and Wednesday at 7:30 pm and the programme became fully networked under this new arrangement from episode 25 on Monday 6 March 1961. As the ITV network grew over the next few years, the programme was transmitted by these new stations on these dates onward: Westward Television from episode 40 on 1 May 1961, Border Television from episode 76 on 4 September 1961, Grampian Television from episode 84 on 2 October 1961, Channel Television from episode 180 on 3 September 1962 and Teledu Cymru (north and west Wales) from episode 184 on 17 September 1962. At this point, the ITV network became complete and the programme was broadcast almost continuously across the country at 7:30 pm on Monday and Wednesday for the next twenty-eight years. From episode 2981 on Friday 20 October 1989 at 7:30 pm, a third weekly episode was introduced and this increased to four episodes a week from episode 4096 on Sunday 24 November 1996, again at 7:30 pm. A second Monday episode was introduced in 2002 and was broadcast at 8:30 pm to usher in the return of Bet Lynch. The Monday 8:30 pm episode was used intermittently during the popular Richard Hillman storyline and became a regular feature from episode 5568 on Monday 25 August 2003. In January 2008, ITV axed the Sunday episode, and instead aired a second episode on Fridays, at 8:30 pm, with the final Sunday episode airing on 6 January 2008, though some episodes thereafter continued to air occasionally on Sundays, usually when an episode was displaced from one of its regular slots by a live football match. From 23 July 2009 to September 2012 the Wednesday show was replaced with an episode at 8:30 pm on Thursdays. A sixth weekly episode was added on Wednesdays at 8:30 pm from 20 September 2017. In March 2020, it was revealed that episodes already filmed for future broadcast (episodes are filmed a few weeks in advance) would be shown differently during the COVID-19 pandemic. Instead of six episodes a week, only three episodes would be broadcast, airing as normal on a Monday, Wednesday and Friday at the normal timeslot of 7:30 pm. The changes took effect from 30 March. The announcement also stated that the show's elderly cast members would be temporarily written out due to health advice issued by Public Health England and the NHS. On 22 March, ITV released a statement confirming that filming of both "Coronation Street" and "Emmerdale" was suspended. In June 2020, ITV announced that filming would resume on 9 June. However, due to the new health and safety measures, cast members over the age of 70 or with underlying health conditions did not come back on set, until the production could determine it was safe for them to return. In July 2020, ITV announced that "Coronation Street" would return to the normal output of six episodes a week in September that year.
In October 2020, Maureen Lipman and David Neilson made their first appearances since July that year, as all cast members over the age of 70 had temporarily left the series earlier in the year. William Roache, Barbara Knox and Sue Nicholls returned in December. On 22 January 2021, ITV announced that filming would be suspended from 25 January in order to rewrite "stories and scripts as a consequence of the coronavirus pandemic" and to "review all health and safety requirements". ITV also confirmed that this decision would not affect their ability to deliver six episodes a week. In January 2022, it was announced that after 60 years in the 7.30 pm slot, "Coronation Street"s transmission time would move to 8pm due to the "ITV Evening News" having a longer duration, pushing "Emmerdale" into the 7.30 pm slot on weeknights. The double-bill episodes on Mondays, Wednesdays and Fridays were merged into hour-long slots on those days. The new scheduling went live on Monday 7 March 2022. Repeats and classic episodes. Repeat episodes, omnibus broadcasts and specials have been shown on various ITV channels. After several years on ITV2, in January 2008 the omnibus returned to the main ITV channel, where it was aired on Saturday mornings or afternoons, depending on the schedule. In May 2008, it moved to Sunday mornings, until August 2008, when it returned to Saturdays. In January 2009, it moved back to Sunday mornings, usually broadcasting at around 9.25am until December 2010. In January 2011, the omnibus moved to Saturday mornings on ITV at 9.25am. During the Rugby World Cup, which took place in New Zealand, matches had to be broadcast on a Saturday morning, so the omnibus moved to Saturday lunchtimes/afternoons during September and October 2011. On 22 October 2011, the omnibus moved back to Saturday mornings at 9.25am on ITV. In January 2012, the omnibus moved to ITV2, and then moved to ITV3 in January 2020. In January 2022, the omnibus moved back to ITV2. Older episodes were broadcast by satellite and cable channel Granada Plus from its launch in 1996. The first episodes shown were from episode 1588 (originally transmitted on Monday 5 April 1976) onwards. The repeats were originally listed and promoted as "Classic Coronation Street", but the "Classic" was dropped in early 2002, by which stage the episodes being shown were from late 1989. By the time of the channel's closure in 2004, the repeats had reached February 1994. In addition to this, "specials" were broadcast on Saturday afternoons in the early years of the channel, with several episodes based on a particular theme or character(s) shown. The last episode shown in these specials was from 1991. In addition, on 27 and 28 December 2003, several Christmas Day editions of the show were broadcast. ITV3 began airing sequential afternoon reruns of "Classic Coronation Street" on 2 October 2017. Two classic episodes were retransmitted Monday to Friday from 2:40 pm to 3:45 pm, starting from episode 2587 (originally transmitted on Wednesday 15 January 1986) onwards.
To mark the 60th anniversary of "Coronation Street", between 7 and 11 December 2020 at 10:00 pm–11:05 pm, ITV3 aired special episodes of the soap including "Episode 1", the tenth anniversary episode from December 1970, two episodes from the twentieth anniversary in December 1980, two episodes from the thirtieth anniversary in December 1990, the "2000 live episode" from the fortieth anniversary in December 2000, and the "fiftieth anniversary episode" which aired after a repeat of "The Road to Coronation Street". On Easter Monday, 18 April 2022, to commemorate the upcoming 90th birthday of William Roache, eight special "Coronation Street" Ken Barlow episodes were aired from 10:25 am to 2:35 pm. The episodes shown were "Episode 1" from December 1960, "Ken and Deirdre Tie the Knot" from July 1981, "Ken's Affair" from December 1989, "Deirdre's Fling" from January 2003, "Steve and Karen's Wedding Shocker" from February 2004, "Ken and Deirdre's Second Wedding" from April 2005, "Ken and Deirdre's Holiday" from August 2014, and "Deirdre's Death" from July 2015. International broadcast. "Coronation Street" is shown in various countries worldwide. YouTube has the first episode and many others available as reruns. The programme was first aired in Australia in 1963 on TCN-9 Sydney, GTV-9 Melbourne and NWS-9 Adelaide, and by 1966 "Coronation Street" was more popular in Australia than in the UK. The show eventually left free-to-air television in Australia. It briefly returned to the Nine Network in a daytime slot during 1994–1995. In 2005, STW-9 Perth began to show episodes before the 6 pm news to improve the lead in to Nine News Perth, but this did not work and the show was cancelled a few months later. In 1996, pay-TV began and Arena began screening the series in one-hour instalments on Saturdays and Sundays at 6:30 pm EST. The series was later moved to pay-TV channel UKTV (now BBC UKTV), where it is still shown. "Coronation Street" is shown Monday to Thursday at 7:20 pm EST, with a double episode on Fridays; episodes on the channel are one week behind the UK broadcast. In Canada, "Coronation Street" is broadcast on CBC Television. Until 2011, episodes were shown in Canada approximately 10 months after they aired in Britain; however, beginning in the fall of 2011, the CBC began showing two episodes every weekday, in order to catch up with the ITV showings, at 6:30 pm and 7 pm local time Monday-Friday, with an omnibus on Sundays at 7.30am. By May 2014, the CBC was only two weeks behind Britain, so the show was reduced to a single showing weeknights at 6:30 pm local time. The show debuted on Toronto's CBLT in July 1966. The 2002 edition of the "Guinness Book of Records" recognises the 1,144 episodes sold to the now-defunct CBC-owned Saskatoon, Saskatchewan, TV station CBKST by Granada TV on 31 May 1971 as the largest number of TV shows ever purchased in one transaction. The show traditionally aired on weekday afternoons in Canada, with a Sunday morning omnibus. In 2004, CBC moved the weekday airings from their daytime slot to prime time. In light of austerity measures imposed on the CBC in 2012, which included further cutbacks on non-Canadian programming, one of the foreign shows to remain on the CBC schedule is "Coronation Street", according to the CBC's director of content planning Christine Wilson, who commented: "Unofficially I can tell you "Coronation Street" is coming back. If it didn't come back, something would happen on Parliament Hill."
Kirstine Stewart, the head of the CBC's English-language division, once remarked: ""Coronation Street" fans are the most loyal, except maybe for curling viewers, of all CBC viewers." As of mid-2022, Canada is about three weeks behind the UK and airs six episodes per week. In Ireland, "Coronation Street" is currently shown on Virgin Media One. The show was first aired in 1978, when RTÉ2 began showing episodes from 1976, although Ireland caught up with the current UK episodes in 1983. In 1992 it moved to RTÉ One, but in 2001 Granada TV bought 45 percent of TV3, and so TV3 broadcast the series from 2001 to 2014. In 2006, ITV sold its share of the channel but TV3 continued to buy the soap until the end of 2014 when it moved to UTV Ireland. "Coronation Street" has been broadcast on each of the main Irish networks, except for the Irish-language network TG4. In December 2016, "Coronation Street" returned to TV3 (now Virgin Media One). The show is consistently the channel's most viewed programme every week. Two Dutch stations have broadcast "Coronation Street": VARA showed 428 episodes between 1967 and 1975, and SBS6 ran the show for a period starting in 2010. From 2006 the series was also broadcast by Vitaya, a small Flemish Belgian channel. In New Zealand, "Coronation Street" has been shown locally since 1964, first on NZBC television until 1975, and then on TV One, which broadcasts it in a 4-episode/2-hour block on Fridays from 7:30 pm. In September 2014, TV One added a 2-episode/1-hour block on Saturday from 8:30 pm. Because TV One did not upgrade to showing the equivalent of five or six episodes per week, New Zealand continued to fall further and further behind with episodes, and was 23 months behind Britain by March 2014. During the week ending 11 April 2014, as in previous weeks, "Coronation Street" was by a considerable margin the least watched programme in TV One's 7:30 pm weeknight slot. The serial had aired on Tuesdays and Thursdays at 7:30 pm until October 2011, when the show moved to a 5:30 pm half-hour slot every weekday. The move proved unpopular with fans, and the series was quickly moved into its present prime-time slot within weeks. Episodes 7883, 7884, 7885 and 7886 were screened on 16 May 2014. These were originally aired in the UK between 4 and 11 June 2012. On 10 May 2018, it was announced that the 2016 episodes then being shown would move to 1 pm Monday to Friday under the title 'Catch-up Episodes', while express episodes airing a week behind the United Kingdom, titled '2018 Episodes', would air in primetime from Wednesday to Friday; these changes took effect from 11 June 2018. In South Africa, "Coronation Street" episodes were broadcast three days after the UK air date on ITV Choice until the channel ceased broadcasting in June 2020; episodes temporarily went off the air until they moved to M-Net City, starting in October 2020. In the United States, "Coronation Street" is available by broadcast or cable only in northern markets where CBC coverage from Canada overlaps the border or is available on local cable systems. It was broadcast on CBC's US cable channel, Trio, until the CBC sold its stake in the channel to Universal; the channel was shut down in 2006. Beginning in 2009, episodes were available in the United States through Amazon.com's on-demand service, a month behind their original UK airdates. The final series of shows available from Amazon appears to be from November 2012, as no new episodes have been uploaded.
On 15 January 2013, online distributor Hulu began airing episodes of the show, posting a new episode daily, two weeks after their original airdates. For a time, Hulu's website stated: "New episodes of "Coronation Street" will be unavailable as of April 7th, 2016", with the same being said for British soap "Hollyoaks", but Hulu is once again showing new episodes of "Coronation Street" as of April 2017, two weeks behind the UK airdate. The BBC/ITV service Britbox shows new episodes on the same day as the UK airing. "Coronation Street" was also shown on USA Network for an unknown period starting in 1982. HM Forces and their families stationed overseas can watch "Coronation Street" on ITV, carried by the British Forces Broadcasting Service, which is also available to civilians in the Falkland Islands. It used to be shown on BFBS1. Satellite channel ITV Choice showed the programme in Asia, the Middle East, Cyprus, and Malta, before the channel ceased broadcasting in 2019. Merchandise. "The Street", a magazine dedicated to the show, was launched in 1989. Edited by Bill Hill, the magazine contained a summary of recent storylines, interviews, articles about classic episodes, and stories that occurred before 1960. The format was initially A5 size, expanding to A4 from the seventh issue. The magazine folded after issue 23 in 1993 when the publisher's contract with Granada Studios Tour expired and Granada wanted to produce their own magazine. On 25 June 2010, a video game of the show was released on Nintendo DS. The game was developed by Mindscape, and allowed players to complete tasks in the fictitious town of Weatherfield. Discography. In 1995, to commemorate the programme's 35th anniversary, a CD titled "The Coronation Street Album" was released, featuring cover versions of modern songs and standards by contemporary cast members. The album produced a Top 40 hit when "The Coronation Street Single" (a double A-side featuring a cover of Monty Python's "Always Look on the Bright Side of Life" by Bill Waddington – with various cast members on backing vocals – on one side and "Something Stupid" by Johnny Briggs & Amanda Barrie on the other) reached number 35 in the official UK charts. In 2010, an album featuring songs sung by cast members was released to celebrate 50 years of "Coronation Street". The album is titled "Rogues, Angels, Heroes & Fools", and was later developed into a musical. Spin-offs. Television. Granada launched one spin-off in 1965, "Pardon the Expression", following the story of clothing store manager Leonard Swindley (Arthur Lowe) after he left Weatherfield. Swindley's management experience was tested when he was appointed assistant manager at a fictional department store, Dobson and Hawks. Granada produced two series of the spin-off, which ended in 1966. In 1967, Arthur Lowe returned as Leonard Swindley in "Turn Out the Lights", a short-lived sequel to "Pardon the Expression". It ran for just one series of six episodes before it was cancelled. In 1972, Neville Buswell and Graham Haberfield starred as Ray Langton and Jerry Booth in a pilot for a potential spin-off series called "Rest Assured". Written and produced by H.V. Kershaw, the pilot episode was titled "Lift Off" and featured Fred Feast (later cast as Fred Gee in Coronation Street) as the lift engineer. No series was commissioned.
From 1985 to 1988, Granada TV produced a sitcom called "The Brothers McGregor" featuring a pair of half-brothers (one black, one white) who had appeared in a single episode of "Coronation Street" as old friends of Eddie Yeats and guests at his wedding. The original actors were unavailable so the characters were recast with Paul Barber and Philip Whitchurch. The show ran for 26 episodes over four series. In 1985, a sister series, "Albion Market", was launched. It ran for one year, with 100 episodes produced. Crossovers. In 2010, several actors from the show appeared on "The Jeremy Kyle Show" as their soap characters: David Platt (Jack P. Shepherd), Nick Tilsley (Ben Price), Tina McIntyre (Michelle Keegan) and Graeme Proctor (Craig Gazey). In the fictional, semi-improvised scenario, David accused Nick (his brother) and Tina (his ex-girlfriend) of sleeping together. "Coronation Street" and rival soap opera "EastEnders" had a crossover for "Children in Need" in November 2010 called "East Street". "EastEnders" stars who visited Weatherfield included Laurie Brett as Jane Beale, Charlie G. Hawkins as Darren Miller, Kylie Babbington as Jodie Gold, Nina Wadia as Zainab Masood and John Partridge as Christian Clarke. On 21 December 2012, "Coronation Street" produced a Text Santa special entitled "A Christmas Corrie" which featured Norris Cole in the style of Scrooge, being visited by the ghosts of dead characters. The ghosts were Mike Baldwin, Maxine Peacock, Derek Wilton and Vera Duckworth. Other special guests included Torvill and Dean, Lorraine Kelly and Sheila Reid. The episode concluded with Norris learning the error of his ways and dancing on the cobbles. The original plan for this feature was to have included Jack Duckworth, along with Vera, but actor Bill Tarmey died before filming commenced. In the end a recording of his voice was played. Documentaries. "Coronation Street: Family Album" was a series of documentaries about various families living on the street. "Farewell ..." was a series of documentaries featuring the best moments of a single character who had recently left the series—most notably, Farewell Mike (Baldwin), Farewell Vera (Duckworth), Farewell Blanche (Hunt), Farewell Jack (Duckworth), Farewell Janice (Battersby), Farewell Liz (McDonald), Farewell Becky (McDonald), and Farewell Tina (McIntyre). Most of these were broadcast on the same day as the character's final scenes in the series. "Stars on the Street" was aired around Christmas 2009. It featured actors from the soap talking about the famous guest stars who had appeared in the series, including people who were in it before they were famous. In December 2010, ITV made a few special programmes to mark the 50th anniversary. "Coronation Street Uncovered: Live", hosted by Stephen Mulhern, was shown after the episode with the tram crash was aired on ITV2. On 7 and 9 December, a countdown of the greatest Corrie moments, "Coronation Street: 50 Years, 50 Moments", was broadcast; viewers voted "The Barlows at Alcoholics Anonymous" the greatest moment. On 10 December Paul O'Grady hosted a quiz show, "Coronation Street: The Big 50", with three teams from the soap and a celebrity team answering questions about Coronation Street and other soaps. Also, "Come Dine with Me" and "Celebrity Juice" aired Coronation Street specials in the anniversary week. International adaptation. The German TV series "Lindenstraße" took "Coronation Street" as its model.
"Lindenstraße" started in 1985 and broadcast its final episode on 29 March 2020, after airing for nearly 35 years. Films. Over the years, "Coronation Street" has released several straight-to-video films. Unlike other soaps, which often used straight-to-video films to cover more contentious plot lines that may not be allowed by the broadcaster, "Coronation Street" has largely used these films to reset their characters in other locations. In 1995, "Coronation Street: The Cruise" also known as "Coronation Street: The Feature Length Special" was released on VHS to celebrate the 35th anniversary of the show, featuring Rita Sullivan, Mavis Wilton, Alec Gilroy, Curly Watts and Raquel Watts. ITV heavily promoted the programme as a direct-to-video exclusive, but broadcast a brief version of it on 24 March 1996. The Independent Television Commission investigated the broadcast, as viewers complained that ITV misled them. In 1997, following the controversial cruise spin-off, "Coronation Street: Viva Las Vegas!" was released on VHS, featuring Vera Duckworth, Jack Duckworth, Fiona Middleton and Maxine Peacock on a trip to Las Vegas, which included the temporary return of Ray Langton. In 1999, six special episodes of "Coronation Street" were produced, following the story of Steve McDonald and Vikram Desai in Brighton, which included the temporary returns of Bet Gilroy, Reg Holdsworth and Vicky McDonald. This video was titled "Coronation Street: Open All Hours" and released on VHS. In 2008, ITV announced filming was to get underway for a new special DVD episode, ", featuring Kirk Sutherland, Fiz Brown, Chesney Brown, which included the temporary return of Cilla Battersby-Brown. Sophie Webster, Becky Granger and Tina McIntyre also make brief appearances. In 2009, another DVD special, ", was released. The feature-length comedy drama followed Roy, Hayley and Becky as they travelled to Romania for the wedding of a face from their past. Eddie Windass also briefly appears. The BBC commissioned a one-off drama called "The Road to Coronation Street", about how the series first came into being. Jessie Wallace plays Pat Phoenix (Elsie Tanner) with Lynda Baron as Violet Carson (Ena Sharples), Celia Imrie as Doris Speed (Annie Walker) and James Roache as his own father William Roache (Ken Barlow). It was broadcast on 16 September 2010 on BBC Four. On 1 November 2010, "Coronation Street: A Knight's Tale" was released. Reg Holdsworth and Curly Watts returned in the film. Mary tries to take Norris to an apparently haunted castle where she hoped to seduce him. Rosie gets a job there and she takes Jason with her. Brian Capron also guest starred as an assumed relative of Richard Hillman. He rises out of a lake with a comedic "wink to the audience" after Hillman drowned in 2003. Rita Sullivan also briefly appears. Online. On 21 December 2008, a web-based miniseries ran on ITV.com; called "Corrie Confidential"; the first episode featured the characters Rosie and Sophie Webster in "Underworld". ITV.com launched a small spin-off drama series called 'Gary's Army Diaries' which revolves around Gary Windass's experiences in Afghanistan and the loss of his best friend, Quinny. Due to their popularity, the three five-minute episodes were recut into a single 30-minute episode, which was broadcast on ITV2. William Roache and Anne Kirkbride starred as Ken and Deirdre in a series of ten three-minute internet 'webisodes'. The first episode of the series titled, "Ken and Deirdre's Bedtime Stories" was activated on Valentine's Day 2011. 
In 2011, an internet-based spin-off called "Just Rosie", starring Helen Flanagan as Rosie Webster, followed her quest to become a supermodel. On 3 February 2014, another web-based miniseries, "Streetcar Stories", began running on ITV.com. It showed what Steve and Lloyd get up to during late nights in their Streetcar cab office. The first episode shows Steve and Lloyd making a cup of tea with "The Stripper" playing in the background, referencing Morecambe and Wise's Breakfast Sketch. The second episode involves the pair having a biscuit dunking competition. During the "Who Attacked Ken" storyline, a miniseries of police files was run on the official "Coronation Street" YouTube channel. They outlined the suspects' details and possible motives. Stage. In August 2010, many "Coronation Street" characters were brought to the stage in Jonathan Harvey's comedy play "Corrie!". The play was commissioned to celebrate the 50th anniversary of the TV series and was presented at The Lowry in Salford, England, by ITV Studios and Phil McIntyre Entertainments. Featuring a cast of six actors who alternate the roles of favourite characters, including Ena Sharples, Hilda Ogden, Hayley and Roy, Richard Hillman, Jack and Vera, Bet Lynch, Steve, Karen and Becky, the play weaves together some of the most memorable moments from the TV show. It toured UK theatres between February 2011 and July 2011, with guest star narrators including Roy Barraclough, Ken Morley and Gaynor Faye. In popular culture. In 1984, the British rock band Queen released the single "I Want to Break Free", which reached number 3 in the UK Singles Chart. The song is memorable for its music video, in which the band members dressed in women's clothing, parodying characters in "Coronation Street"; it is considered an homage to the show. The video depicts Freddie Mercury as a housewife, loosely based on Bet Lynch, who wants to "break free" from his life. Although Lynch was a blonde in the soap opera, Mercury thought he would look too silly as a blonde and chose a dark wig. Guitarist Brian May plays another, more relaxed housewife based on Hilda Ogden. In December 2022, the American singer Bob Dylan was offered a cameo on "Coronation Street" after revealing to "The Wall Street Journal" that he is a fan of the ITV soap. Sponsorship. Cadbury was the first sponsor of "Coronation Street", beginning in July 1996. In the summer of 2006, Cadbury Trebor Bassetts had to recall over one million chocolate bars, due to suspected salmonella contamination, and "Coronation Street" stopped the sponsorship for several months. In 2006, Cadbury did not renew their contract, but agreed to sponsor the show until "Coronation Street" found a new sponsor. Harveys then sponsored "Coronation Street" from 30 September 2007 until December 2012. In the "Coronation Street: Romanian Holiday" film, Roy and Hayley Cropper are filmed in front of a Harveys store, and in "Coronation Street: A Knight's Tale", a Harveys truck can be seen driving past Mary Taylor's motorhome. Compare The Market took over as sponsor from 26 November 2012 until 30 November 2020. On 10 December 2020, it was announced that Argos would be the new sponsor of "Coronation Street", starting on 1 January 2021. In November 2011, a Nationwide Building Society ATM in Dev Alahan's corner shop became the first use of paid-for product placement in a UK primetime show. In 2018, the shop fronts of Co-Op and Costa Coffee were added to the sets, along with characters using shopping bags bearing the respective logos as props. 
In the Republic of Ireland, where the programme airs on Virgin Media One, Hyundai has been the sponsor since January 2015.
6852
5022239
https://en.wikipedia.org/wiki?curid=6852
Caligula
Gaius Caesar Augustus Germanicus (31 August 12 – 24 January 41), also called Gaius and Caligula (), was Roman emperor from AD 37 until his assassination in 41. He was the son of the Roman general Germanicus and Augustus' granddaughter Agrippina the Elder, members of the first ruling family of the Roman Empire. He was born two years before Tiberius became emperor. Gaius accompanied his father, mother and siblings on campaign in Germania, at little more than four or five years old. He had been named after Gaius Julius Caesar, but his father's soldiers affectionately nicknamed him "Caligula" ('little boot'). Germanicus died in Antioch in 19, and Agrippina returned with her six children to Rome, where she became entangled in a bitter feud with Emperor Tiberius, who was Germanicus' biological uncle and adoptive father. The conflict eventually led to the destruction of her family, with Caligula as the sole male survivor. In 26, Tiberius withdrew from public life to the island of Capri, and in 31, Caligula joined him there. Tiberius died in 37, and Caligula succeeded him as emperor, at the age of 24. Of the few surviving sources about Caligula and his four-year reign, most were written by members of the nobility and senate, long after the events they purport to describe. For the early part of his reign, he is said to have been "good, generous, fair and community-spirited" but increasingly self-indulgent, cruel, sadistic, extravagant and sexually perverted thereafter, an insane, murderous tyrant who demanded and received worship as a living god, humiliated the Senate, and planned to make his horse a consul. Most modern commentaries instead seek to explain Caligula's position, personality and historical context. Some historians dismiss many of the allegations against him as misunderstandings, exaggeration, mockery or malicious fantasy. During his brief reign, Caligula worked to increase the unconstrained personal power of the emperor, as opposed to countervailing powers within the principate. He directed much of his attention to ambitious construction projects and public works to benefit Rome's ordinary citizens, including racetracks, theatres, amphitheatres, and improvements to roads and ports. He began the construction of two aqueducts in Rome: the Aqua Claudia and the Anio Novus. During his reign, the empire annexed the client kingdom of Mauretania as a province. He had to abandon an attempted invasion of Britain, and the installation of his statue in the Temple in Jerusalem. In early 41, Caligula was assassinated as a result of a conspiracy by officers of the Praetorian Guard, senators, and courtiers. At least some of the conspirators might have planned this as an opportunity to restore the Roman Republic and aristocratic privileges. If so, their plan was thwarted by the Praetorians, who seem to have spontaneously chosen Caligula's uncle Claudius as the next emperor. Caligula's death marked the official end of the Julii Caesares in the male line, though the Julio-Claudian dynasty continued to rule until the demise of Caligula's nephew, the Emperor Nero. Early life. Caligula was born in Antium on 31 August AD 12, the third of six surviving children of Germanicus and his wife and second cousin, Agrippina the Elder. Germanicus was a grandson of Mark Antony, and Agrippina was the daughter of Marcus Vipsanius Agrippa and Julia the Elder, making her the granddaughter of Augustus. The future emperor Claudius was Caligula's paternal uncle. 
Caligula had two older brothers, Nero and Drusus, and three younger sisters, Agrippina the Younger, Julia Drusilla and Julia Livilla. At the age of two or three, he accompanied his father, Germanicus, on campaigns in the north of Germania. He wore a miniature soldier's outfit devised by his mother to please the troops, including army boots ("caligae") and armour. The soldiers nicknamed him "Caligula" ("little boot"). Winterling believes he would have enjoyed the attention of the soldiers, to whom he was something of a mascot, though he later grew to dislike the nickname. Germanicus was a respected, immensely popular figure among his troops and Roman civilians of every class, and was widely expected to eventually succeed his uncle Tiberius as emperor. For his successful northern campaigns, he was awarded the great honour of a triumph. During the triumphal procession through Rome, Caligula and his siblings shared their father's chariot, and the applause of the populace. A few months later, Germanicus was despatched to tour Rome's allies and provinces with his family. They were received with great honour; at Assos, Caligula gave a public speech, aged only 6. Somewhere "en route", Germanicus contracted what proved to be a fatal illness. He lingered awhile, and died at Antioch, Syria, in AD 19, aged 33, convinced that he had been poisoned by the provincial governor, Gnaeus Calpurnius Piso. Many believed that he had been killed at the behest of Tiberius, as a potential rival. Germanicus was cremated, and his ashes were taken to Rome, escorted by his wife and children, Praetorian guards, civilian mourners and senators, then placed in the Mausoleum of Augustus. Caligula lived with his mother Agrippina in Rome, in a milieu very different from that of his earlier years. Agrippina made no secret of her imperial ambitions for herself and her sons, and in consequence, her relations with Tiberius rapidly deteriorated. Tiberius believed himself under constant threat from treason, conspiracy and political rivalry. He forbade Agrippina to remarry, for fear that a remarriage would serve her personal ambition, and introduce yet another threat to himself. The last years of his principate were dominated by treason trials, whose outcomes were determined by senatorial vote. Agrippina and Caligula's brother Nero were tried and banished in the year 29 on charges of treason. The adolescent Caligula was sent to live with his great-grandmother (Tiberius' mother), Livia. After her death two years later, he was sent to live with his grandmother Antonia Minor. In the year 30, Tiberius had Caligula's brothers, Drusus and Nero, declared public enemies by the Senate, and exiled. Caligula and his three sisters remained in Italy as hostages of Tiberius, kept under close watch. Capri. In 31, Caligula's brother Nero died in exile. Caligula was remanded to the personal care of Tiberius at Villa Jovis on Capri. He was befriended by Tiberius' Praetorian prefect, Naevius Sutorius Macro. Macro had been active in the downfall of Sejanus, his ambitious and manipulative predecessor in office, and was a trusted intermediary between the emperor and his senate in Rome. Philo, a Jewish diplomat and later a witness to several events in Caligula's court, writes that Macro protected and supported Caligula, allaying any suspicions Tiberius might harbour concerning his young ward's ambitions. 
Macro represented Caligula to Tiberius as "friendly, obedient" and devoted to Tiberius' grandson, Tiberius Gemellus, who was seven years younger than he was. Caligula is described during this time as a first-rate orator, well-informed, cultured and intelligent, a natural actor who recognised the danger he was in, and hid his resentment of Tiberius' maltreatment of himself and his family behind such an obsequious manner that it was said of him that there had never been "a better slave or a worse master". Caligula's failure to protest the destruction of his family is taken by Tacitus as evidence that his "monstrous character was masked by a hypocritical modesty". Winterling observes that a forthright protest would "certainly have cost him his life". In 33, Caligula's mother and his brother Drusus died, while still in exile. In the same year, Tiberius arranged the marriage of Caligula and Junia Claudilla, daughter of one of Tiberius' most influential allies in the Senate, Marcus Junius Silanus. Caligula was given an honorary quaestorship in the "cursus honorum", the series of political promotions that could lead to a consulship. He would hold this very junior senatorial post until his sudden nomination as emperor. Junia died in childbirth the following year, along with her baby. In 35, Tiberius named Caligula as joint heir with Tiberius' grandson, Gemellus, who was Caligula's junior by seven years and not yet an adult. At the time, Tiberius seemed to be in good health, and likely to survive until Gemellus' majority. In Philo's account, Tiberius was genuinely fond of Gemellus, but doubted his personal capacity to rule and feared for his safety should Caligula come to power. Suetonius claims that Tiberius, ever mistrustful but still shrewd in his mid-70s, saw through Caligula's apparent self-possession to an underlying "erratic and unreliable" temperament, not one to be trusted in government; and he claims that Caligula took pleasure in cruelty, torture, and sexual vice of every kind. Tiberius is said to have indulged the young man's appetite for theatre, dance and singing, in the hope that this would help soften his otherwise savage nature; "he used to say now and then that to allow Gaius to live would prove the ruin of himself and of all men, and that he was rearing a viper for the Roman people and a Phaethon for the world." Winterling points out that this judgment draws on later, not particularly accurate accounts of Caligula's rule; Suetonius credits Tiberius with a knowledge of human nature which in reality was not only foreign to him, but famously unsound. At Capri, Caligula learned to dissimulate. He probably owed his life to that and, as all the ancient sources agree, to Macro. Many believed, or claimed to believe, that given a little more time, Tiberius would have eliminated Caligula as a possible successor, but died before this could be done. Emperor. Early reign. Tiberius died on 16 March AD 37, a day before the Liberalia festival. He was 77 years old. Suetonius, Tacitus and Cassius Dio repeat variously elaborated rumours which held that Caligula, perhaps with Macro, was directly responsible for his death. Philo and Josephus, the latter a Romano-Jewish writer who served Vespasian a generation later, describe Tiberius' death as natural. On the same day, Caligula was hailed as emperor by members of the Praetorian guard at Misenum. 
His leadership of the "domus Caesaris" ("Caesar's household") as its sole heir and pater familias was ratified by the senate, who acclaimed him "imperator" two days after the death of Tiberius. Caligula entered Rome on 28 or 29 March, and with the consensus of "the three orders" (senate, equestrians and common citizens) the Senate conferred on him the "right and power to decide on all affairs". "Princeps". In a single day, and with a single piece of legislation, the 25-year-old Caligula, previously a virtual unknown in Rome's political life, and with no military service, was thus granted the same trappings, authority and powers that Augustus had accumulated piecemeal, over a lifetime and sometimes reluctantly. Until his first formal meeting with the Senate, Caligula refrained from using the titles they had granted him. His studied deference must have gone some way to reassure the more astute that he should prove amenable to their guidance. Some must have resented the political manipulations that led to this extraordinary settlement. Caligula was now entitled to make, break or ignore any laws he chose. Augustus had shown, and Tiberius had failed to realise, that the roles of "primus inter pares" ("first among equals") and "princeps legibus solutus" ("a princeps not bound by the laws") required the exercise of personal responsibility, self-restraint, and above all, tact; as if the Senate still held the power they had voluntarily surrendered. In the words of scholar Anthony A. Barrett, "Caligula would be restrained only by his own sense of discretion, which became in lamentably short supply as his reign progressed". Caligula dutifully asked the Senate to approve divine honours for his predecessor but was turned down, in line with senatorial and popular opinion regarding the dead emperor's worth. Caligula did not push the issue; he had made the necessary gesture of filial respect. Tiberius' will named two heirs, Caligula and Gemellus, but the latter was still a minor, and could not hold any kind of office. The will was annulled with the standard justification that Tiberius must have been insane when he composed it, incapable of good judgment. Although Tiberius' will had been legally set aside, Caligula honoured many of its terms, and in some cases, improved on them. Tiberius had provided each praetorian guardsman with a generous gratitude payment of 500 sesterces. Caligula doubled this, and took credit for its payment as an act of personal generosity; he also paid bonuses to the city troops and the army outside Italy. Every citizen in Rome was given 150 sesterces, and heads of households twice that amount. Building projects on the Palatine hill and elsewhere were also announced, which would have been the largest of these expenditures. Thanks to Macro's preparations on his behalf, Caligula's accession was a "brilliantly stage-managed affair". The legions had already sworn loyalty to Caligula as their imperator. Now Caligula gave the miserly Tiberius a magnificent funeral at public expense, and a tearful eulogy, and met with an ecstatic popular reception along the funeral route and in Rome itself. Among Caligula's first acts as emperor was the provision of public games on a grand scale. Philo describes Caligula in these early days as universally admired. Suetonius writes that Caligula was loved by many, for being the beloved son of the popular Germanicus. Three months of public rejoicing ushered in the new reign. 
Philo describes the first seven months of Caligula's reign as a "Golden Age" of happiness and prosperity. Josephus claims that in the first two years of his reign, Caligula's "high-minded... even-handed" rule earned him goodwill throughout the Empire. Caligula took up his first consulship on 1 July, two months after his succession. He accepted all titles and honours offered him except "pater patriae" ("father of the fatherland"), which had been conferred on Augustus. Caligula refused it, protesting his youth, until 21 September 37. He commemorated his own father, Germanicus, with portraits on coinage, adopted his name, and renamed the month of September after him. He granted his sisters and his grandmother Antonia Minor extraordinary privileges, normally reserved for the Vestals, and female priesthoods of the deified Augustus; their powers were entirely ceremonial, not executive, but their names were included in the standard formulas used in the senate house to invoke divine blessings on debates and proceedings, and the annual prayers for the safety of emperor and state. Caligula named his favourite sister, Drusilla, as heir to his "imperium". Oaths were sworn in the name of Caligula, and his entire family. One of his sesterces not only identifies each sister by name, but associates her with a particular imperial virtue; "security", "concord" or "fortune". Caligula ordered that an image of his deceased mother, Agrippina, must accompany all festival processions. He made his uncle Claudius his consular colleague, tasked with siting statues of Caligula's two dead brothers, and occasionally standing in for Caligula at games, feasts and ceremonies. Claudius' own family found his limp and stammer "something of a public embarrassment"; he mismanaged the statue commission and his first consulship ended soon after, alongside Caligula's but his appointment elevated him from mere equestrian to senator, and eligible for consulship. Barrett and Yardley describe Claudius' consulship as an "astonishingly enlightened gesture" on Caligula's part, not one of Caligula's attempts to court popularity, as Suetonius would have it. Caligula made a public show of burning Tiberius' secret papers, which gave details of his infamous treason trials. They included accusations of villainy and betrayal against various senators, many of whom had willingly assisted in prosecutions of their own number to gain financial advantage, imperial favour, or to divert suspicion away from themselves; any expression of dissatisfaction with the emperor's rule or decisions could be taken as undermining the State, and lead to prosecution for "maiestas" (treason). Caligula claimed – falsely, as it later turned out – that he had read none of these documents before burning them. He used a coin issue to advertise his claim that he had restored the security of the laws, which had suffered during Tiberius' prolonged absence from Rome; he reduced a backlog of court cases in Rome by adding more jurors and suspending the requirement that sentences be confirmed by imperial office. Stressing his descent from Augustus, Caligula retrieved the remains of his mother and brothers from their places of exile for interment in the Mausoleum of Augustus. Caligula began work on a temple to Livia, widow of Augustus; she held the honorific title of Augusta while still living, and when she died was eventually made a "diva" (goddess) of the Roman state under Claudius. The temple had been vowed in her lifetime, but not constructed. Illness and recovery. 
Between approximately mid-October and mid-November 37, Caligula fell seriously ill through unknown causes and hovered for a month or so between life and death. Rome's public places filled with citizens who implored the gods for his recovery, some even offering their own lives in exchange. When he recovered, Caligula embarked on what might have been a purge of suspected opponents or conspirators. Caligula's relations with his senate had been congenial but were now sullied by the forced suicide, for reasons unknown, of the eminent senator Silanus, formerly Caligula's father-in-law. Gemellus, Caligula's adopted son and heir, now 18 years old and legally adult, was also disposed of. Suetonius offers several versions of Gemellus' death. In one, Gemellus was given the adult "toga virilis", then charged with having taken an antidote, "implicitly accusing Caligula of wanting to poison him", and forced to kill himself. Several months later, in early 38, Caligula forced suicide on his Praetorian Prefect, Macro, without whose help and protection he would not have survived, let alone gained the throne as sole ruler. Any link between the deaths is speculative, but it is possible that Silanus had conspired to make Gemellus emperor, should Caligula fail to recover; and Caligula might simply have tired of Macro's control and influence. In 38, Caligula nominated Marcus Aemilius Lepidus as his heir, and married him to his beloved sister Drusilla, but on 19 June that year, Drusilla died. She was deified and renamed Panthea ("All Goddesses"), the first mortal woman in Roman history to be made a "diva" (goddess of state). Caligula, bereft, declared a period of compulsory, universal mourning. Drusilla's death is one of several events close in time to Caligula's illness, along with the death of Antonia and any unreported effects of the illness itself, that are thought by some to have contributed to a fundamental change in Caligula's attitudes. Purges so early in Caligula's reign suggest to Weidemann that "the new emperor had learnt a great deal from Tiberius" and "that attempts to divide his reign into a 'good' beginning followed by unremitting atrocities [...] are misplaced". Public profile. Caligula shared many of the popular passions and enthusiasms of the lower classes and young aristocrats: public spectacles, particularly gladiator contests, chariot and horse racing, the theatre and gambling, but all on a scale which the nobility could not match. He trained with professional gladiators and staged exceptionally lavish gladiator games, being granted exemption by the senate from the sumptuary laws that limited the number of gladiators to be kept in Rome. He was openly and vocally partisan in his uninhibited support or disapproval of particular charioteers, racing teams, gladiators and actors, shouting encouragement or scorn, sometimes singing along with paid performers or declaiming the actors' lines, and generally behaving as "one of the crowd". In gladiator contests, he supported the "parmularius" type, who fought using small, round shields. In chariot races, he supported the Greens, and personally drove his favourite racehorse, Incitatus ("Speedy"), as a member of the Green faction. Most of Rome's aristocracy would have found this an unprecedented, unacceptable indignity for any of their number, let alone their emperor. Caligula showed little respect for distinctions of rank, status or privilege among the senate, whose members Tiberius had once described as "men ready to be slaves". 
Among those whom Caligula recalled from exile were actors and other public performers who had somehow caused Tiberius offence. Caligula seems to have built a loyal following among his own loyal freedmen, citizen-commoners, disreputable public performers on whom he lavished money and other gifts; and the lower nobility (equestrians) rather than the senators and nobles whom he clearly and openly mistrusted, despised and humiliated for their insincere simulations of loyalty. Dio notes, with approval, that Caligula allowed some equestrians senatorial honours, anticipating their later promotion to senator based on their personal merits. To reverse declining membership of the equestrian order, Caligula recruited new, wealthy members empire-wide, and scrupulously vetted the order's membership lists for signs of dishonesty or scandal. He seems to have ignored trivial misdemeanours, and would have anticipated the creation of "new men" ("novi homines"), first of their families to serve as senators. They would owe him a debt of gratitude and loyalty for their advancement. Barrett describes some of the supposed equestrian offences punished by Caligula as "decidedly trivial", and their punishments as sensationalist. Dio claims that Caligula had more than 26 equestrians executed in a circus "fracas"; in Suetonius' biography "more than 20" lives were lost in what is almost certainly the same event, described as a violent but accidental crush. Some sources claim that Caligula forced equestrians and senators to fight in the arena as gladiators. Condemnation to the gladiator arena as a combatant was a standard punishment, doubling as public entertainment, for non-citizens found guilty of certain offences. Laws of AD 19 by Augustus and Tiberius banned voluntary participation of the elite in any public spectacles, but the ban was never particularly effective, and was broadly ignored in Caligula's reign. During Caligula's illness two citizens, one of whom was an equestrian, offered to fight as gladiators if only the gods would spare the emperor's life. The offers were insincere, intended to flatter and invite reward. When Caligula recovered, he insisted that they be taken at face value, to avoid accusations of perjury: "cynical, but not without wit of a kind". Public reform and finance. In 38, Caligula lifted censorship, and published accounts of public funds and expenditure. Suetonius congratulates this as the first such act by any emperor. Very soon after his succession, he restored the right of the popular assembly (comitia) to elect magistrates on behalf of the common citizenry, a right that had been taken over by the Senate under Tiberius and Augustus. The aediles, elected officials who managed public games and festivals, and maintained the fabric of roads and shrines, would now have incentive to spend their own money on lavish, high-profile spectacles and other "munera" (gifts to the state or people), to win the popular vote. Dio writes that this, "though delighting the rabble, grieved the sensible, who stopped to reflect, that if the offices should fall once more into the hands of the many... many disasters would result". When the Senate outright refused to accept this, Caligula restored control of elections to them. Either way, the emperor ultimately chose which candidates stood for election, and which were elected. Caligula was quite capable of recognising his own plans and decisions as flawed, and abandoning, revising or reversing them when faced with opposition. 
He was open to good advice, but could just as easily take its offering as an insult to his youth or understanding – Philo quotes his warning "Who dares teach me?" Caligula abandoned his plan to convert the Temple of Jerusalem to a temple of the Imperial cult, with a statue of himself as Zeus, when warned that the plan would arouse extreme protests, and injure the local economy. He gave funds where they were needed; he helped those who lost property in fires, and abolished a deeply unpopular tax on sales, but whether his extravagant gifts to favourites during his earliest reign – be they actors, charioteers or other public performers – drew on his personal wealth or state coffers is not known. Personal generosity and magnanimity, coupled with discretion and responsibility, were expected of the ruling elite, and the emperor in particular. At some time, Caligula ruled that bequests to office-holders remain property of the office, not of the office-holder. Tax and treasury. Suetonius claims that Caligula squandered 2.7 billion sesterces in his first year and addressed the consequent treasury deficit by confiscating the estates of wealthy individuals, after false accusations, fines or outright seizure, even the death penalty, as a means of raising money. This seems to have started in earnest around the time of Caligula's confrontation with the senate (in early 39). Suetonius's retrospective balance sheet overlooks what would have been owed to Caligula, personally and in his capacity as emperor, on Tiberius' death, and the release of the former emperor's hoarded wealth into the economy at large. Caligula's inheritance included the deceased empress Livia's vast bequest, which Caligula distributed among its nominated public, private and religious beneficiaries. Barrett in "Caligula: The Abuse of Power" asserts that this "massive cash injection would have given the Roman economy a tremendous boost". Dio remarks the beginnings of a financial crisis in 39, and connects it to the cost of Caligula's extravagant bridge-building project at Baiae. Suetonius has presumably the same financial crisis starting in 38; he does not mention a bridge but lists a broad range of Caligula's extravagances, said to have exhausted the state treasury. To Wilkinson, Caligula's uninterrupted use of precious metals in coin issues does not suggest a bankrupt treasury, though there must have been a blurring of boundaries between Caligula's personal wealth, and his income as head of state. Caligula's immediate successor, Claudius, abolished taxes, embarked on various costly building projects and donated 15,000 sesterces to each Praetorian Guard in 41 as his own reign began, which suggests that Caligula had left him a solvent treasury. In the long term, the occasional windfall aside, Caligula's spending exceeded his income. Fund-raising through taxation became a major preoccupation. Provincial citizens were liable for direct payment of taxes used to fund the military, a payment from which Italians were exempt. Caligula abolished some taxes, including the deeply unpopular sales tax, but he introduced an unprecedented range of new ones, and rather than employ professional tax farmers (publicani) in their collection, he made this a duty of the notoriously forceful Praetorian Guard. Dio and Suetonius describe these taxes as "shameful": some were remarkably petty. 
Caligula taxed "taverns, artisans, slaves and the hiring of slaves", edibles sold in the city, litigation anywhere in the Empire, weddings or marriages, the wages of porters "or perhaps couriers", and most infamously, a tax on prostitutes (active, retired or married) or their pimps, liable for "a sum equivalent to a single transaction". Citizens of provincial Italy lost their previous tax exemptions. Most individual tax bills were fairly small but cumulative; over Caligula's brief reign, taxes were doubled overall. Even then, the revenue was nowhere near enough, and the imposition was deeply resented by Rome's commoners. Josephus claims that this led to riotous protests at the Circus. Barrett remarks that stories of consequent "mass executions" there by the military should "almost certainly" be dismissed as "standard exaggeration". Property or money left to Tiberius as emperor but not collected on his death would have passed to Caligula as office-holder. Roman inheritance law recognised a legator's obligation to provide for his family; Caligula seems to have considered his fatherly duties to the state entitled him to a share of every will from pious subjects. The army was not exempt; centurions who left nothing or too little to the emperor could be judged guilty of ingratitude, and have their wills set aside. Centurions who had acquired property by plunder were forced to turn over their spoils to the state. Stories of a brothel in the Imperial palace, staffed by Roman aristocrats, matrons and their children, are taken literally by Suetonius and Dio; McGinn believes they could be based on a single incident, extended to an institution in the telling. Similar allegations would be made in the future against Commodus and Elagabalus. Winterling, citing Dio 59.28.9, traces the outline of the story to Cassius Dio's account for AD 40, and his allegation that the noble tenants of newly built suites of rooms at the palace were compelled to pay exorbitant rents for the privilege of living so close to Caligula, and under the protection of the praetorians. No brothel is mentioned in this account. Suetonius appears to reverse the traditional aristocratic client-patron ceremonies of mutual obligation, and have Caligula accepting payments for maintenance from his loyal consular "friends" at morning salutations, evening banquets, and bequest announcements. The sheer numbers of "friends" involved meant that meticulous records were kept of who had paid, how much, and who still owed. His agents would then visit the very same consuls who had been involved in conspiracies against him, rail against the Senate's treachery "en masse" but ask for "gifts" from individuals to express their loyal friendship in return. A refusal was unthinkable. Winterling describes the families who occupied these rooms as hostage, under the supervision of the Praetorians; some paid up willingly, some reluctantly, but all paid. Caligula made loans available at high interest to those who lacked the necessary funds, to complete the humiliation of Rome's elite, especially the old Republican families. Despite his biographers' attempts to ridicule Caligula's taxes, many were continued after his death. The military remained responsible for all tax collection, and the tax on prostitution continued up to the reign of Severus Alexander. Caligula's ruling that bequests made to any reigning emperor became property of his office, not himself as a private individual, was made constitutional under Antoninus Pius. Coinage. 
Caligula did not change the structure of the monetary system established by Augustus and continued by Tiberius, but the contents of his coinage differed from theirs. The location of the imperial mint for the coins of precious metals (gold and silver) is a matter of debate among numismatists of ancient coinage. It seems that Caligula initially produced his precious coins from Lugdunum (now Lyon, France), like his predecessors, then moved the mint to Rome in 37–38, although it is possible that this move occurred later, under Nero. His base metal coinage was struck in Rome. Unlike Tiberius, whose coins remained almost unchanged throughout his reign, Caligula used a variety of types, mostly featuring Divus Augustus, as well as his parents Germanicus and Agrippina, his dead brothers Nero and Drusus, and his three sisters Agrippina, Drusilla, and Livilla. The reason for the extensive emphasis on his relatives was to highlight Caligula's double claim to the Principate, from both the Julian and Claudian sides of the dynasty, and to call for the unity of the family. The sesterce with his three sisters was discontinued after 39, due to Caligula's suspicion regarding their loyalty. He also issued a sesterce celebrating the Praetorian cohorts as a means of paying them Tiberius' bequest at the beginning of his reign. Caligula minted a quadrans, a small bronze coin, to mark the abolition of the "ducentesima", a 0.5% tax on sales. The output of the precious metal mints was small, and his sesterces were mostly made in limited quantities, which makes his coins very rare today. This rarity cannot be attributed to Caligula's alleged "damnatio memoriae" reported by Dio, as removing his coins from circulation would have been impossible; besides, Mark Antony's coins continued to circulate for two centuries after his death. Caligula's common coins are base metal types with Vesta, Germanicus, and Agrippina the Elder, and the most common is an as with his grandfather Agrippa. Finally, Caligula kept open the mint at Caesarea in Cappadocia, which had been created by Tiberius, in order to pay military expenses in the province with silver drachmae. Numismatists Harold Mattingly and Edward Sydenham consider the artistic style of Caligula's coins inferior to that of Tiberius and Claudius; they especially criticise the portraits, which they find too hard and lacking in detail. Construction. Caligula had a fondness for grandiose, costly building projects, many of which were intended to benefit or entertain the general population but are described in Roman sources as wasteful. In the city of Rome, he completed the temple of Augustus and the reconstruction of the theatre of Pompey. He is said to have built a bridge between the temple of Castor and Pollux and the Capitol. Barrett (2015) believes that this bridge existed only in Suetonius' account, and should perhaps be dismissed as a fantasy, with possible origins in some jocular remark by Caligula. Caligula began an amphitheatre beside the Saepta Julia; he cleared the latter space for use as an arena, and filled it with water for a single naumachia (a sham naval battle fought as entertainment). He supervised the extension and rebuilding of the imperial palace to include a gallery for his art collection. Philo and his party were given a tour of the gallery during their diplomatic visit. Barrett (2015) considers Philo's description of Caligula as a "would-be connoisseur and aesthete" to be "probably not very wide of the mark". 
To help meet Rome's burgeoning demand for fresh water, he began the construction of the aqueducts Aqua Claudia and Anio Novus, which Pliny the Elder considered to be engineering marvels. He built a large racetrack, now known as the Circus of Gaius and Nero. In its central spine he incorporated an Egyptian obelisk, now known as the Vatican obelisk, which he had brought by sea on a gigantic, purpose-built ship, which used 120,000 modii of lentils as ballast. At Syracuse, he repaired the city walls and temples. He pushed to keep roads in good condition throughout the empire, and extended the existing network: to this end, Caligula investigated the financial affairs of current and past highway commissioners. Those guilty of negligence, embezzlement or misuse of funds were forced to repay what they had dishonestly used for other purposes, or fulfil their commissions at their own expense. Caligula planned to rebuild the palace of Polycrates at Samos, to finish the temple of Didymaean Apollo at Ephesus and house his own cult and image there, and to found a city high up in the Alps. He intended to dig a canal through the Isthmus of Corinth in Greece and sent a chief centurion to survey the site. None of these plans came to fruition. Treason trials. In the course of 39, Caligula's increasingly tense relationship with his Senate deteriorated into outright hostility and confrontation. This is one of Dio's more confusing accounts, involving conspiracies, denunciations and trials for treason ("maiestas"), following Caligula's tirade of invective against the entire senate, in which he reviewed and condemned their current and past behaviour. He accused them of servility, treachery and hypocrisy in voting honours to Tiberius and Sejanus while they lived, and rescinding those honours once their recipients were safely dead. He declared that it would be folly to seek the love or approval of such men: they hated him, and wanted him dead, so it would be better that they should fear him. Caligula's diatribes exposed the idealised "princeps" or First Senator as illusion and imposture. When the senate returned the next day, they seemed to confirm his suspicions, and voted him a special guard of armed praetorians to protect him and guard his statues. Apparently seeking to please him and assure his safety, the Senate proposed that his senatorial chair be raised "on a high platform even in the very Senate house". They offered a thanksgiving to Caligula, as to a monarch, expressing gratitude for allowing them to live when others had died. Winterling suggests that Caligula's three subsequent consulships, sworn at the Rostra, were vain attempts to make amends, public statements of respect for the senators as his equals. Barrett perceives these later consulships as symbolic of Caligula's continued intention to dominate the senate and the state; he describes the change in Caligula's rule as a gradual unravelling, a "descent into serious mismanagement and impenetrable mistrust" and, latterly, into "arbitrary terror"; but Dio's claim that in fact "there was nothing but slaughter" is undermined by evidence that most senators managed to survive Caligula's reign with their persons and fortunes intact. Caligula had not, after all, destroyed Tiberius' records of treason trials. He reviewed them and decided that numerous senators discharged from Tiberius' court hearings seemed to have been guilty of conspiracy all along, against emperor and state – the worst form of "maiestas" (treason). 
Tiberius' treason trials had encouraged professional "delatores" (informers), who were loathed by the populace, but many of the accused had testified against each other, and against Caligula's own family, even to the point of initiating the prosecutions themselves. If they had acted against Caligula's family, they might act against Caligula himself. New investigations were launched; Dio names five once-trusted, consular senators tried for "maiestas", but his allegation that senators or others were put to death in "great numbers" is unsupported. Two of the five prospered under Caligula's rule, and beyond. Caligula preferred to publicly humiliate his enemies in the senate, especially those of ancient families, by stripping them of their inherited honours, dignities and titles. In early September, he dismissed the two suffect consuls, citing their inadequate, low-key celebration of his birthday (31 August) and excessive attention to the anniversary of Actium (2 September). This was the last battle in a damaging civil war between two of Caligula's close ancestors, in which he found no cause for celebration. One of the dismissed consuls killed himself: Caligula may have suspected him of conspiracy. Incitatus. Suetonius and Dio outline Caligula's supposed proposal to promote his favourite racehorse, Incitatus ("Swift"), to consul and, later, to a priesthood of his own cult. This could have been an extended joke, created by Caligula himself in mockery of the senate. A persistent, popular belief that Caligula actually promoted his horse to consul has become "a byword for the promotion of incompetents", especially in political life. It may have been one of Caligula's many oblique, malicious or darkly humorous insults, mostly directed at the senatorial class, but also against himself and his family. Winterling sees it as an insult to the consulars themselves. An aristocrat's highest ambition, the consulship, could be laid open to ruinous competition and at the same time, to ridicule. David Woods believes it unlikely that Caligula meant to insult the post of consul, as he had held it himself. Suetonius, possibly failing to get the joke, presents it as further proof of Caligula's insanity, adding circumstantial details more usually expected of the senatorial nobility, including palaces, servants and golden goblets, and invitations to banquets. Bridge at Baiae. In 39 or 40, by Suetonius' reckoning, Caligula ordered a temporary floating bridge to be built using a double line of ships as pontoons, earth-paved and stretching for over two miles from the resort of Baiae, near Naples, to the neighbouring port of Puteoli, with resting places between. Some ships were built on site but grain ships were also requisitioned, brought to site, secured and temporarily resurfaced. Any practical purpose for the bridge is unclear; Winterling believes that it might have been intended to mark Caligula's attempted invasion of Britain. A two-day ceremonial was performed, with offerings to the sea-god Neptune and Invidia (Envy), and a satisfactory result, in that the sea remained completely calm. The bridge was said to rival the Persian king Xerxes' pontoon bridge across the Hellespont. For the opening ceremony, Caligula donned the supposed breastplate of Alexander the Great, and rode his favourite horse, Incitatus, across the bridge, perhaps defying a prediction, attributed by Suetonius to Tiberius' soothsayer Thrasyllus of Mendes, that Caligula had "no more chance of becoming emperor than of riding a horse across the Bay of Baiae". 
On the second day, he rode the bridge from end to end several times "at full tilt", accompanied by the soldiery, famous nobles and hostages. Seneca and Dio claim that grain imports were dangerously depleted by Caligula's re-purposing of Rome's grain ships as pontoons. Barrett finds these accusations absurd; if the bridge was finished in 39, that was far too early to have had any effect on the annual grain supply, and "a genuine grain crisis was simply blamed on the most outlandish episode at hand." Dio places this episode soon after Caligula's furious denunciation of the Senate; Barrett speculates that Caligula may have intended the whole event as an object lesson on how completely he was in charge: it may also provide "the most striking example of his wasteful extravagance"; its pointlessness might have been the whole point. Provinces. Judaea and Egypt. Caligula's reign saw an increase in tensions between Jews native to their homeland of Judaea, Jews of the diaspora, and ethnic Greeks. Greeks and Jews had settled throughout the Roman Empire and Judaea was ruled as a Roman client kingdom. Jews and Greeks had settled in Egypt following its conquest by Macedonian Greeks, and remained there after its conquest by Rome. While the Alexandrian Greeks held citizen status, Alexandrian Jews were classified as mere settlers, with no statutory or citizen rights other than those granted them by their Roman governors. The Greeks feared that official recognition of Jews as citizens would undermine their own status and privilege. Caligula distrusted the prefect of Egypt, Aulus Avilius Flaccus, who had conspired against Caligula's mother and had connections with Egyptian separatists. In 38, Caligula sent Herod Agrippa, governor of Batanaea and Trachonitis and a personal friend, to Alexandria unannounced to check on Flaccus. According to Philo, the visit was met with jeers and mockery from the Greek population, who saw Agrippa as a gimcrack "king of the Jews". In Philo's account, a mob of Greeks broke into synagogues to erect statues and shrines of Caligula, against Jewish religious law. Flaccus responded by declaring the Jews "foreigners and aliens", and expelling them from all but one of Alexandria's five districts, where they lived under dreadful conditions. Philo gives an account of various atrocities inflicted on Alexandria's Jews within and around this ghetto by the city's Greek population. Caligula held Flaccus responsible for the disturbances, exiled him, and eventually executed him. In 39, Agrippa accused his uncle Herod Antipas, the tetrarch of Galilee and Perea, of planning a rebellion against Roman rule with the help of Parthia. Herod Antipas confessed, Caligula exiled him, and Agrippa was rewarded with his territories. Riots again erupted in Alexandria in 40 between Jews and Greeks, when Jews who refused to venerate the emperor as a god were accused of dishonouring him. In the Judaean city of Jamnia, resident Greeks built a shoddy, sub-standard altar to the Imperial cult, intending to provoke a reaction from the Jews, who immediately tore it down. This was interpreted as an act of rebellion. In response, Caligula ordered the erection of a statue of himself in the Jewish Temple of Jerusalem, a political rather than a religious act for Rome, but a blasphemy for the Jews and in conflict with Jewish monotheism. 
In this context, Philo wrote that Caligula "regarded the Jews with most especial suspicion, as if they were the only persons who cherished wishes opposed to his". In May 40, Philo accompanied a deputation of Alexandrian Jews and Greeks to Caligula, and a second deputation after 31 August that year, during the worst of the Alexandrian riots. Neither of these encounters proved decisive. Both gave Caligula ample opportunity for casual, friendly banter, which seems to have included humiliating levity, always at the Jewish delegation's expense; but he made no claims of divinity, either in his dress or his speech, merely asking at the second encounter, more or less rhetorically, why Jews found his veneration so difficult. Philo and Josephus each saw Caligula's behaviour as driven by his claims to divinity, which for a Jew would have virtually defined him as fundamentally insane, despite appearances otherwise. The ethnically Greek population of Alexandria had already made their loyalty to the new emperor clear, with displays of his image as a focus for his cult. The destruction of the altar at Jamnia and, presumably, the removal of "idolatrous" images placed in synagogues by Greek citizens might have been intended as expressions of Jewish religious fervour, rather than a response aimed at one tyrant's offensive claims of personal godhood. Philo seems to have loathed Caligula from the start, but his belief that Caligula hated the Jews and was preparing their destruction has no basis in evidence. To place Caligula's statue in Temple precincts, showing him dressed as Jupiter, would have been consistent with the Empire-wide religious phenomenon known as Imperial cult, from whose full expression Jews had so far been exempted; they could offer prayer "for" the emperor, rather than "to" him; this was far from a perfect compromise, but it was the highest honour that Jewish tradition permitted a mortal. Caligula found this most unsatisfactory, and demanded that his statue be installed in the Temple of Jerusalem forthwith. The Governor of Syria, Publius Petronius, ordered a statue from Sidon, then postponed its installation for as long as he could, rather than risk a serious Jewish rebellion. In some versions, Caligula proved amenable to rational discussion with Agrippa and Jewish authorities, and, faced with threats of rebellion, destruction of property and loss of the grain harvest if the plan went ahead, abandoned the project. In more hostile versions, Caligula, demonstrably insane and incapable of rational discussion, impulsively changed his mind once again and reissued the order to Petronius, along with the threat of enforced suicide if he failed. An even larger statue of Caligula-Zeus was ordered from Rome; the ship carrying it was still under way when news of Caligula's death reached Petronius. Caligula's plan was abandoned, Petronius survived and the statue was never installed. Philo reports a rumour that in 40, Caligula announced to the Senate that he planned to move to Alexandria, and rule the Empire from there as a divine monarch, a Roman pharaoh. Very similar rumours attended Julius Caesar's last days, up to his assassination and very much to his discredit. Caligula's ancestor Mark Antony took refuge in Egypt with Cleopatra, and Augustus had made it a so-called "Imperial province", under his direct control. It was the main source of Italy's grain supply, and was administered by members of the equestrian order, directly responsible to the ruling emperor. 
Egypt was, more or less, Caligula's property, to dispose of as he wished. Roman knowledge of pharaonic brother-sister marriages to maintain the royal bloodline would have shored up the many flimsy, scandalised allegations of adolescent incest between Caligula and Drusilla, supposedly discovered by Antonia but reported as rumour, and only by Suetonius. Barrett finds no further evidence for these allegations, and advises a sceptical attitude. Germany and the Rhine frontier. In late 39 or early 40, Caligula ordered the concentration of military forces and supplies in upper Germany, and made his way there with a baggage train that supposedly included actors, gladiators, women, and a detachment of Praetorians. He might have meant to follow the paths of his father and grandfather, and attack the Germanic tribes along the upper Rhine; but according to ancient historians he was ill-prepared, and retreated in a panic. Modern historians, however, suppose that he had a valid political reason for his Germanic operation, and that it might even have been successful. But the exact locations and enemies of his campaign cannot be determined; possibilities include the Chatti in and around modern-day Hesse or the Suebi east of the Upper Rhine. The ancient sources report that Caligula used the opportunity of his operations in Germany to seize the wealth of rich allies whom he conveniently suspected of treason, "putting some to death on the grounds that they were 'plotting' or 'rebelling'". Caligula accused the Imperial legate, Gaetulicus, of "nefarious plots", and had him executed – according to Dio, he was killed for being popular with his troops. Lepidus, along with Caligula's two sisters, Agrippina and Livilla, was accused of being part of this conspiracy; he too was executed, and the sisters were exiled after being condemned "pro forma" for adultery. A senatorial embassy arrived from Rome, headed by Caligula's uncle Claudius, to congratulate the emperor on suppressing this latest conspiracy. It met with a hostile reception, in which Claudius was supposedly ducked in the Rhine (though this might have been the loser's award in a contest of Latin and Greek oratory held by Caligula in Gaul that winter). On Caligula's return from the north, he abandoned the theatre seating plans that Augustus had introduced so that rank alone would determine one's place. In the consequent free-for-all, seating was left to chance; doubtless to Caligula's pleasure, fights broke out as senators competed with common citizens for the best seats. Very late in his reign, possibly in its last few days, Caligula sent a communiqué in preparation for his imminent ovation in Rome, following his military activities in the north and his suppression of Lepidus. He announced that he would be returning only "to those who wanted him back", the "Equestrians and the People"; he did not mention the Senate or senators, of whom he had grown increasingly mistrustful. Auctions. In late 39, Caligula wintered at Lugdunum (modern Lyon) in Gaul, where he auctioned off his sisters' portable property, including their jewellery, slaves and freedmen. Dio claims that wealthy bidders at these auctions were willing to offer far more than items were worth: some to show their loyalty, and others to rid themselves of some of the wealth that could render their execution worthwhile. Caligula is said to have used intimidation and various auctioneer's tricks and tactics to boost prices. 
In an event that Suetonius describes as "well known", a Praetorian gentleman, nodding off to sleep after a gladiator match, woke to find that he had bought 13 gladiators for the vastly over-inflated sum of 9 million sesterces. Caligula's first Lugdunum auction proved such a successful fundraiser that he had many of the furnishings of his palace in Rome carted to Lugdunum and auctioned off; they included many precious family heirlooms. Caligula recited their provenance during the auction, in an attempt to help ensure a fair return on objects intrinsically valuable, and seemingly much sought after by the wealthy for their Imperial associations. Income from this second auction was relatively moderate. Kleijwegt (1996) describes Caligula's performance as vendor and auctioneer at this second auction as "completely out of character with the image of a tyrant". Auctions of Imperial property were acceptable ways to "balance the books", practised by Augustus and, later, by Trajan; they were expected to benefit the bidders as well as the vendor. Roman auctioneers were held in very low esteem, but Kleijwegt claims that Caligula seems to have behaved more like a benevolent "princeps" in this second auction, without malice, greed or intimidation. Britannia. In the spring of 40, Caligula tried to extend Roman rule into Britannia. Two legions had been raised for this purpose, both likely named "Primigeniae" in honour of Caligula's newborn daughter. Ancient sources depict Caligula as being too cowardly to have attacked or as mad, but stories of his threatening a decimation of his troops indicate mutinies. Broadly, "it is impossible to judge why the army never embarked" on the invasion. Beyond mutinies, it may have simply been that British chieftains acceded to Rome's demands, removing any justification for war. Alternatively, it could have been merely a training and scouting mission or a short expedition to accept the surrender of the British chieftain Adminius. Suetonius reports that Caligula ordered his men to collect seashells as "spoils of the sea"; this may also be a mistranslation of a Latin term for siege engines. The conquest of Britannia was later achieved during the reign of Caligula's successor, Claudius. Mauretania. In 40, Caligula annexed Mauretania, a wealthy, strategically significant client kingdom of Rome, inhabited by fiercely independent semi-nomads who resisted Romanisation. Its ruler, Ptolemy of Mauretania, was a noble descendant of Juba II, popular, extremely wealthy and with a reputation as "feckless and incompetent". Ptolemy failed to deal effectively with an uprising and was removed. The usual fate of incompetent client kings was retirement and a comfortable exile, but Caligula ordered Ptolemy to Rome and had him executed, some time after the spring of 40. His removal proved unpopular enough in Mauretania to provoke an uprising. Rome divided Mauretania into two provinces, Mauretania Tingitana and Mauretania Caesariensis, separated by the river Malua. Pliny claims that the division was the work of Caligula, but Dio states that the uprising was subdued in 42 (after Caligula's death), by Gaius Suetonius Paulinus and Gnaeus Hosidius Geta, and the division only took place after this. This confusion might mean that Caligula decided to divide the province, but postponed the division because of the rebellion. The first known equestrian governor of the joint provinces was Marcus Fadius Celer Flavianus, in office in 44. 
Details on the Mauretanian events of 39–44 are lost, including an entire chapter by Dio on the annexation. Dio and Tacitus suggest that Caligula may have been motivated by fear, envy, and consideration of his own ignominious military performance in the north, rather than pressing military or economic needs. The rebellion of Tacfarinas had shown how exposed Africa Proconsularis was to its west and how the Mauretanian client kings were unable to provide protection to the province, and it is thus possible that Caligula's expansion was a prudent and ultimately successful response to potential future threats. Religion. According to Barrett, "[o]f all the manifestations of wild and extravagant behaviour exhibited by Caligula during his brief reign, nothing has better served to confirm the popular notion of his insanity than his apparent demand to be recognised as a god." Philo, Caligula's contemporary, claims that Caligula costumed himself as various heroes and deities, starting with demigods such as Dionysos, Herakles and the Dioscuri, and working up to major deities such as Mercury, Venus and Apollo. Philo describes these impersonations in a context of private pantomime or theatrical performances he may have witnessed or heard of during his diplomatic visit, as evidence that Caligula wanted to be venerated as a living god. Philo, as a Jew and a monotheist, took this as proof of the emperor's insanity. Caligula's impersonations had a precedent; Augustus had once thrown a party in which he and his guests dressed up as the Olympian gods; Augustus was made up and dressed as Apollo. No-one was thought insane in consequence, and none claimed to be the god they impersonated; but the event was not repeated. It showed near-blasphemous disrespect to the gods in question, and insensitivity to the population at large – the feast was staged during a famine. Coin issues of the official Roman mint, dated to the early 20s BC, show Octavian as Apollo, Jupiter and Neptune. This too may have been thought a transgression, and was not repeated. Caligula took his own impersonations less seriously than some, certainly less seriously than Philo did. According to Dio, when a Gallic shoemaker laughed to see Caligula dressed as Jupiter, pronouncing oracles at the crowd from a lofty place, Caligula asked "and who do you think I am?" The shoemaker answered "a complete idiot". Caligula seems to have appreciated his straightforward honesty. Dio claims that Caligula impersonated Jupiter to seduce various women; that he sometimes referred to himself as a divinity in public meetings; and that he was sometimes referred to as "Jupiter" in public documents. Caligula's special interest in Jupiter as Rome's chief deity is confirmed by all surviving sources. Simpson believes that Caligula may have considered Jupiter an equal, perhaps a rival. According to Ittai Gradel, Caligula's performances as various deities prove no more than a penchant for theatrical fancy-dress and a mischievous desire to shock; as emperor, Caligula was also "pontifex maximus", one of Rome's most powerful and influential state priests. The promotion of mortal rulers to godlike status, to honour their superior standing and perceived merits, was a commonplace phenomenon among Rome's eastern allies and client states; during their eastern tour, Germanicus, Agrippina and their children, including Caligula, were officially received as living deities by several cities of the Greek East. 
In Roman culture a client could flatter their living patron as "Jupiter on Earth", without reprimand. The "divi" (deceased members of the Imperial family promoted to divine status) were creations of the Senate, who voted them into official existence, appointed their priesthood and granted them cult at state expense. Cicero could protest at the implications of Caesar's divine honours while living, but address Publius Lentulus as "parens ac deus" (parent and god) to thank him for his help, as aedile, against the conspirator Catiline. Daily reverence was offered as a matter of course to patrons, heads of household and the powerful by their clients, families and social inferiors. In 30 BC, libation-offerings to the "genius" of Octavian (later Augustus) became a duty at public and private banquets, and from 12 BC, state oaths were sworn by the "genius" of Augustus as the living emperor. Notwithstanding Dio's claims that cult to living emperors was forbidden in Rome itself, there is abundant evidence of municipal cult to Augustus in his lifetime, in Italy and elsewhere, locally organised and financed. As Gradel observes, no Roman was ever prosecuted for sacrificing to his emperor. Caligula seems to have taken his religious duties very seriously. He found a replacement for the aged priest of Diana at Lake Nemi, reorganised the Salii (priests of Mars), and pedantically insisted that as it was "nefas" (religiously improper) for Jupiter's leading priest, the Flamen Dialis, to swear any oath, he could not swear the imperial oath of loyalty. Caligula wished to take over or share the half-finished but splendid Temple of Apollo in Greek Didyma for his own cult. Seemingly, his statue was prepared, but possibly not installed. When Pausanias visited the still-unfinished temple a century later, its cult statue was of Apollo. Suetonius and Dio mention a temple to Caligula in the city of Rome. Most modern scholarship agrees that if such a temple existed, it was probably on the Palatine. Augustus had already linked the Temple of Castor and Pollux directly to his imperial residence on the Palatine, and established an official priesthood of lesser magistrates, the "seviri Augustales", usually drawn from his own freedmen to serve the "genius Augusti" (his "family spirit") and Lares (the twin ancestral spirits of his household). Dio claims that Caligula stationed himself to receive veneration, dressed as Jupiter Latiaris, between the images of Castor and Pollux, the twin Dioscuri, to whom he referred – humorously – as his doorkeepers. Dio's claim that two temples were built for Caligula in Rome is unconfirmed. Simpson believes it likely that Caligula, voted a temple on the Palatine by the Senate, funded it himself. An embassy from Greek states to Rome greeted Caligula as the "new god Augustus". In the Greek city of Cyzicus, a public inscription from the beginning of Caligula's reign gives thanks to him as a "New Sun-god". Egyptian provincial coinage and some state "dupondii" show Caligula enthroned; he was the first reigning Roman "princeps" to be described as the "New Sun", with the radiate crown of the Sun-god, or of Caligula's divine antecedent, the Augustus. Caligula's image on other state coinage carries no such "trappings of divinity". Compared to the full-blown cults to major deities of state, "genius" cults were quite modest in scope. 
Augustus, once deceased, was officially worshipped as a "divus" – immortal, but somewhat less than a full-blown deity; Tiberius, his successor, forbade his own personal cult outright in Rome itself, probably in consideration of Julius Caesar's assassination following his hubristic promotion as a living divinity. Augustus, and after him, Tiberius, insisted that if temples to honour them in the provinces were proposed by the local elite, they must be shared by the "genius of the Senate", or the personification of the Roman people, or the "genius" of Rome itself. Dio claims that Caligula sold priesthoods for his unofficial "genius" cult to the wealthiest nobles, for a "per capita" fee of 10 million sesterces, and made loans available to those who could not afford immediate full payment. His priests supposedly included his wife, Caesonia, and his uncle Claudius, who, Dio claims, was bankrupted by the cost. The circumstances mark this out as private cult and personal humiliation among the wealthy elite, not subsidised by the Roman state. Throughout his reign, Caligula seems to have remained popular with the masses, in Rome and the empire. There is no sound evidence that he caused the removal, replacement or imposition of Roman or other deities, or even that he threatened to do so, outside the hostile anecdotes of his biographers. Barrett (2015) asserts that the "emphatic and unequivocal message of the material evidence is that Caligula had no desire for the world to identify him as a god, even if, like most people, he enjoyed being treated like one." He did not demand worship as a living god, but he permitted it when it was offered; Imperial etiquette, and the examples of Augustus and Tiberius, would have him refuse divine honours but thank those who offered them, implying that their status was equal to his. He seems to have taken his own "genius" cult very seriously, but his fatal offence was to wilfully "insult or offend everyone who mattered", including the military officers who assassinated him. Assassination and aftermath. On 24 January 41, the day before his due departure for Alexandria, Caligula was assassinated by the Praetorian tribunes Cassius Chaerea and Cornelius Sabinus, and a number of centurions. Josephus names many of Caligula's inner circle as conspirators, and Dio seems to have had access to a senatorial version which purported to name many others. More likely, very few conspirators would have been involved, and not all need have been directly in touch with each other. The fewer who knew, the greater the chance of success. Previous attempts had foundered or faded out when faced with the rewards and risks of betrayal by colleagues, whether through torture, fear of torture or promised reward. The Senate was a disunited body of self-interested, wealthy and mistrustful aristocrats, unwilling to risk their own prospects, and determined to present a virtuous, united front. In Josephus' account of Caligula's assassination, Chaerea was a "noble idealist", deeply committed to "Republican liberties"; he was also motivated by resentment of Caligula's routine personal insults and mockery. Suetonius and all other sources confirm that Caligula had insulted Chaerea, giving him watchwords like the ribald "Priapus" or "Venus", the latter said to refer to Chaerea's weak, high voice, and either his soft-hearted attitude when collecting taxes, or his duty to collect the tax on prostitutes. He was also known to do Caligula's "dirty work" for him, including torture. 
Chaerea, Sabinus and others accosted Caligula as he addressed an acting troupe of young men beneath the palace during a series of games and dramatics being held for the "Divus" Augustus. The source details vary, but all agree that Chaerea was first to stab Caligula. The narrow space available offered little room for escape or rescue, and by the time Caligula's loyal Germanic guard could come to his defence, their Emperor was already dead. They killed several of Caligula's party, including some innocent senators and bystanders. The killing only stopped when the Praetorians took control. Josephus reports that the Senate tried to use Caligula's death as an opportunity to restore the Republic. This would have meant the abolition of the office of emperor, the end of dynastic rule, and restoration of the former social stature and privilege of nobles and senators. At least one senator, Lucius Annius Vinicianus, seems to have thought it an opportunity for a takeover. Some modern scholars believe he was the conspiracy's main instigator. Most ordinary citizens were taken aback by Caligula's murder, and found no cause to celebrate in losing the benefits of his rule. Almost all the named conspirators were from the elite. When Caligula's death was confirmed, the nobles and senators who had prospered through hypocrisy and sycophancy during his reign dared to claim prior knowledge of the plot, and share the credit for its success with their peers. Others sought to distance themselves from anything to do with it. The assassins, fearing continued support for Caligula's family and allies, sought out and murdered Caligula's wife, Caesonia, and their young daughter Julia Drusilla, but were unable to reach Caligula's uncle, Claudius. In the traditional account, a soldier, Gratus, found Claudius hiding behind a palace curtain. A sympathetic faction of the Praetorian Guard smuggled him out to their nearby camp, and nominated him as emperor. The Senate, faced with what now seemed inevitable, confirmed their choice. Caligula's "most powerful and universally feared adviser", the freedman Callistus, may have engineered this succession, having discreetly shifted his loyalty from Caligula to Claudius while Caligula lived. The killing of Caligula had been extralegal, tantamount to regicide, and those who carried it out had broken their oaths of loyalty to him. Claudius, as a prospective replacement for Caligula, could acknowledge his predecessor's failings but could not be seen to condone his murder, or find fault with the principate as an institution. Caligula had been popular with a clear majority of Rome's lesser citizenry, and the Senate could not afford to ignore the fact. Claudius appointed a new Praetorian prefect, and executed Chaerea, a tribune named Lupus, and the centurions involved. He allowed Sabinus to commit suicide. Claudius refused the Senate's requests to formally declare Caligula "hostis" (a public enemy), or condemn his memory (see "damnatio memoriae"). He also turned down a proposal to officially condemn all the Caesars and destroy their temples. Caligula's name was removed from the official lists of oaths and dedications; some inscriptions were removed or obliterated; most of his statues had the heads recut, to resemble Augustus, or Claudius, or in one case, Nero, who would suffer a similar fate. According to Suetonius, Caligula's body was placed under turf until it was burned and entombed by his sisters. Personal life. 
Caligula's childhood health may have been delicate; Augustus appointed two physicians to accompany his journey north to join his parents, in AD 14; Suetonius connects this to possible childhood bouts of epilepsy. As an adult, he was subject to fainting fits. He was a habitually light sleeper, prone to nodding off during banquets, sleeping no more than 3 hours in any one night, and subject to vivid nightmares. Barrett describes him as "nervous and highly strung". When speaking in public, he would fidget and move about, overcome by the flood of his own words and ideas; despite that, he was an eloquent speaker. He grew stronger with age, but was probably never robust or athletic, despite his practise as a charioteer. Little is known of his illness in 38, nor what it changed, if anything, but it was a serious, possibly life-threatening event. Philo blames it on Caligula's habitual over-indulgence in rich foods and wine, general intemperance and a stress-induced nervous breakdown. Philo believed that the illness removed Caligula's pretence of decency, and revealed his inner cruelty and ruthlessness, evident in the murders of his own father-in-law, Silanus, and young cousin Gemellus. The sources are somewhat contradictory on the matter of Caligula's sex life. Seneca claims that during a public banquet he humiliated senator Decimus Valerius Asiaticus, his "especial friend", with a loud first-hand account of Valerius' wife's disappointing performance in bed. Caligula is said to have had "enormous" sexual appetites, several mistresses and male lovers, but in relation to the alleged "perversions" practised at Capri by Tiberius and, in some sources, shared by Caligula, Barrett finds him remarkably prudish in expelling the so-called "spintriae" from the island on his accession. Caligula's first wife was Junia Claudia, daughter of ex-consul Marcus Junius Silanus. Like most marriages in Rome's upper echelons and, perhaps, all but one of Caligula's four marriages, this was a political alliance, intended to produce a legitimate heir and further Caligula's dynasty. Junia and her baby died in child-birth, less than a year later. Soon after, Macro seems to have persuaded his own wife, Ennia Thrasylla, to take up a sexual affair with Caligula, perhaps to help him through the loss. Suetonius and Dio claim that Caligula met Livia Orestilla at her marriage to Gaius Calpurnius Piso, and abducted her so that he could marry her instead and father a legitimate heir. When she proved faithful to her former husband, Caligula banished her. The Arval Brethren's records confirm her marriage to Piso, but under ordinary Roman custom. Susan Wood dismisses Caligula's "marriage" to her as a drunken party stunt. Caligula's marriage to the "beautiful... very wealthy" and extravagant Lollia Paulina was quickly followed by divorce, on the grounds of her infertility. His fourth and last marriage, to Caesonia, seems to have been a love-match, in which he was both "uxorious and monogamous", and fathered a daughter whom he named Julia Drusilla, in commemoration of his late sister. Caligula's contemporaries could not understand his attraction to Caesonia; she had proved herself fertile in previous marriages but also had a reputation for "high living and low morals", very far from the model of an aristocratic Roman wife. 
Tales reported by Josephus, Suetonius and the satirist Juvenal regarding Caligula's sexual dynamism are inconsistent with rumours that Caesonia had to arouse his interest with a love potion, which turned his mind and brought on his "madness". Barrett suggests that this rumour might have had no foundation other than Caligula's quip that "he felt like torturing Caesonia to discover why he loved her so passionately". Allegations of incest between Caligula and his sisters, or just between him and his favourite, Drusilla, go back no further than Suetonius, who admits that in his own time, they were hearsay. Seneca and Philo, moralising contemporaries of Caligula, do not mention these stories even after Caligula's death, when it would have been safe to do so. Caligula's devotion to his youngest sister was evident, but then as now, allegations of incest fit the amoral, "mad Emperor" stereotype, promiscuous with money, sex and the lives of his subjects. Dio repeats, as fact, the rumour that Caligula also had "improper relations" with his two older sisters, Agrippina and Livilla. Mental condition. There is no reliable evidence of Caligula's mental state at any time in his life. Had he been thought truly insane, his misdeeds would not have been thought his fault: Winterling points out that in Roman law, the insane were not legally responsible for their actions, no matter how extreme. Responsibility for their control and restraint fell on those around them. In the course of their narratives, all the primary and contemporary sources give reasons to discredit and ultimately condemn Caligula, for offences against proprieties of class, religion or his role as emperor. "Thus, his acts should be seen from other angles, and the search for 'mad Caligula' abandoned". Barrett suggests that from a very early age, with the loss of his father, then of his mother and what remained of his family, Caligula was preoccupied with his own survival. Given near limitless powers to use as he saw fit, he used them to feed his sense of self-importance, "practically devoid of any sense of moral responsibility, a man for whom the tenure of the principate was little more than an opportunity to exercise power". Caligula "clearly had a highly developed sense of the absurd, resulting in a form of humour that was often cruel, sadistic and malicious, and which made its impact essentially by cleverly scoring points over those who were in no position to respond in kind." Philo saw Caligula's illness of 37 as a form of nervous collapse, a response to the extreme stresses and strains of Imperial rule, for which Caligula was temperamentally ill-equipped. Philo, Josephus and Seneca see Caligula's apparent "insanity" as an underlying personality trait accentuated through self-indulgence and the unlimited exercise of power. Seneca acknowledges that Caligula's promotion to emperor seemed to make him more arrogant, angry and insulting. Several modern sources suggest underlying medical conditions as explanations for some aspects of his behaviour and appearance. They include mania, bipolar disorder, schizophrenia, encephalitis, meningitis, and epilepsy, the so-called "falling sickness". Benediktson refines Suetonius' statement that Caligula could not swim to a diagnosis of interictal temporal lobe epilepsy, and a consequent fear of seizures that prevented his learning to swim. 
In Romano-Greek medical theory, severe epilepsy attacks were associated with the full moon and the moon goddess Selene, with whom Caligula was claimed to converse and enjoy sexual congress. Suetonius' descriptions of Caligula as physically repulsive are neither reliable nor likely, considering his ecstatic and enthusiastic reception as a youthful "princeps" by the populace. In the ancient world, a person's physique was believed to be a reliable guide to their character and behaviour. Contemporary historiography. Most facts and circumstances of Caligula's reign are lost to history. The two most important literary sources on Caligula and his reign are Suetonius, a government official of equestrian rank, born around 70 AD; and Cassius Dio, a Bithynian senator who held consulships in AD 205 and 229. Suetonius tends to arrange his material thematically, with little or no chronological framework, more biographer than historian. Dio provides a somewhat inconsistent chronology of Caligula's reign. He dedicates 13–21 chapters to positive features of Caligula's reign but nearly 40 to Caligula as "monster". Philo's works "On the Embassy to Gaius" and "Flaccus" give some details on Caligula's early reign, but more on events involving Jews in Judea and Egypt, whose political and religious interests conflicted with those of the ethnically Greek, pro-Roman population. Philo saw Caligula as responsible for the suffering of the Jews, whom he invariably portrays in a morally positive light. Seneca's various works give mostly scattered anecdotes on Caligula's personality, probably written in the reign of Claudius, who had a vested interest in the portrayal of his predecessor as "cruel and despotic, even mad". Seneca was prone to "grovelling flattery" of whoever reigned at the time. His experience under Caligula "could have clouded his judgment". He narrowly avoided a death sentence in AD 39, probably imposed for his association with known conspirators. Caligula had a low opinion of his literary style. Further contemporaneous histories of Caligula's reign are attested by Tacitus, who describes them as biased for or against Caligula; of Tacitus' own work, little of relevance to Caligula survives but Tacitus' works testify to his general hostility to the imperial system. Among the known losses of his works is a substantial portion of the "Annals". Fabius Rusticus and Cluvius Rufus wrote histories, now lost, condemning Caligula. Tacitus describes Fabius Rusticus as a friend of Seneca, prone to embellishments and misrepresentations. Cluvius Rufus was a senator involved in Caligula's assassination; his original works are lost, but he was a competent historian, used as a primary source by Josephus, Tacitus, Suetonius and Plutarch. Caligula's sister, Agrippina the Younger, wrote an autobiography that included a detailed account of Caligula's reign, but it too is lost. Agrippina was banished by Caligula for her connection to Marcus Lepidus, who conspired against him. Caligula also seized the inheritance of Agrippina's son, the future emperor Nero. Gaetulicus flattered Caligula in writings now lost. Suetonius wrote his biography of Caligula 80 years after his assassination, and Cassius Dio over 180 years after; the latter offers a loose chronology. Josephus gives a detailed account of Caligula's assassination and its aftermath, published around 93 AD, but it is thought to draw upon a "richly embroidered and historically imaginative" anonymous biography of Herod Agrippa, presented as a Jewish "national hero". 
Pliny the Elder's "Natural History" has a few brief references to Caligula, possibly based on the accounts of his friend Suetonius, or on an unnamed, shared source. Of the few surviving sources on Caligula, none paints him in a favourable light. Little has survived on the first two years of his reign, and only limited details on later significant events, such as the annexation of Mauretania, Caligula's military actions in Britannia, and the basis of his feud with the Senate. External links.
6854
44444316
https://en.wikipedia.org/wiki?curid=6854
Church–Turing thesis
In computability theory, the Church–Turing thesis (also known as the computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term "effectively calculable" to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability, among them the λ-calculus, general recursive functions, and Turing machines. Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is "general recursive". This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below). On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the "informal" notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (the physical Church–Turing thesis) and what can be efficiently computed (the complexity-theoretic Church–Turing thesis). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below). Statement in Church's and Turing's words. Rosser (1939) addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in the sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions", given in a footnote in his 1938 Ph.D. thesis "Systems of Logic Based on Ordinals", supervised by Church, are virtually the same. The thesis can be stated as: "Every effectively calculable function is a computable function". Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated the thesis in similar terms. History. One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. 
This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a "definition" that "identified" two or more propositions, (iii) an "empirical hypothesis" to be verified by observation of natural events, or (iv) just "a proposal" for the sake of argument (i.e. a "thesis")? Circa 1930–1952. In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed "axiomatizing" the notion of "effective calculability", as Church reported in a 1935 letter to Kleene. But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, which Gödel had detailed in his 1934 lectures in Princeton, New Jersey (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically". Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Kleene, with the help of Church and J. Barkley Rosser, produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well-formed formula has a beta normal form. Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system". A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion. Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church. Thus, Post in his 1936 paper was also discounting Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms. Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. 
In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). In a proof-sketch added as an appendix to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that "his" formal definition of a mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be "definitions" of "effective calculability"; neither framed their statements as "theses". Rosser (1939) formally identified the three notions as definitions. Kleene proposes "Thesis I": This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I". The Church–Turing Thesis: Stephen Kleene, in "Introduction to Metamathematics", finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability, having switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the intuitionism of E. J. Brouwer. In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses the term "Church–Turing thesis" for the first time, in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone. Later developments. An attempt to better understand the notion of "effective computability" led Robin Gandy (Turing's student and friend) in 1980 to analyze "machine" computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's Game of Life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy." His most important fourth, "the principle of causality", is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable." 
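Turing's a-machines, and the machine models that Gandy and later writers analysed, can be made concrete with a short simulator. The following Python sketch is an illustration only: the transition-table encoding and the sample machine (a unary successor) are assumptions chosen for brevity, not drawn from Turing's paper or Gandy's axioms.

```python
# A minimal Turing-machine simulator. Illustrative sketch only: the
# transition-table encoding and the sample machine are assumptions.

def run_turing_machine(transitions, tape, state="q0", halt_state="halt", max_steps=10_000):
    """Run a single-tape machine.

    transitions maps (state, symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right). The tape is a dict from cell
    index to symbol; absent cells hold the blank symbol "_".
    """
    tape = dict(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            return tape
        write, move, state = transitions[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    raise RuntimeError("machine did not halt within max_steps")

# Sample machine: scan right over a block of 1s and append one more,
# i.e. the unary successor function n -> n + 1.
successor = {
    ("q0", "1"): ("1", +1, "q0"),    # move right over the input block
    ("q0", "_"): ("1", +1, "halt"),  # write an extra 1 at the first blank
}

if __name__ == "__main__":
    tape = run_turing_machine(successor, {i: "1" for i in range(3)})
    print(sum(1 for v in tape.values() if v == "1"))  # prints 4
```

The same unary successor function is also λ-definable and general recursive; agreement of this kind between the formalisms is exactly what the thesis concerns.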
In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a "computor"—"a human computing agent who proceeds mechanically"; these constraints amount to boundedness and locality conditions on the computor's configurations and operations. The matter remains in active discussion within the academic community. The thesis as a definition. The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function. Success of the thesis. Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions ""reckonable" in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) ""canonical" [also called "normal"] "systems"". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function." All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1". Informal usage in proofs. Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). Dirk van Dalen gives an example of this informal use of the Church–Turing thesis, in which a set B of natural numbers is argued to be decidable simply by describing, in plain language, an effective procedure for deciding membership in it. In order to make such an example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. 
But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive. Variations. The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable." The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved, for instance, that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be "simultaneously" achieved for a simulation of a Random Access Machine on a Turing machine. If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not, however, invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation." Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, which they call super-Turing computation. Philosophical implications. Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. 
Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings: There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory. Non-computable functions. One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input "n" and returns the largest number of symbols that a Turing machine with "n" states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method. Several computational models allow for the computation of (Church-Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.
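As a concrete companion to the Busy Beaver discussion above, the sketch below enumerates every 2-state, 2-symbol Turing machine and simulates each for a bounded number of steps. The encoding and the step cutoff are illustrative assumptions; precisely because the Busy Beaver function is uncomputable, a search of this kind can only certify a lower bound, since machines that outrun the cutoff cannot, in general, be proven non-halting.

```python
# Brute-force search over all 2-state, 2-symbol Turing machines.
# Illustrative sketch: the cutoff is an assumption. Because the busy
# beaver function is uncomputable, the search certifies a lower bound --
# machines still running at the cutoff cannot in general be classified.
from itertools import product

STATES, SYMBOLS, MOVES = ("A", "B"), ("0", "1"), (-1, +1)
TARGETS = STATES + ("HALT",)

def ones_if_halts(table, max_steps=100):
    """Simulate one machine on a blank tape; return 1s written if it halts."""
    tape, head, state = {}, 0, "A"
    for _ in range(max_steps + 1):
        if state == "HALT":
            return sum(1 for v in tape.values() if v == "1")
        write, move, state = table[(state, tape.get(head, "0"))]
        tape[head] = write
        head += move
    return None  # did not halt within the cutoff: status unknown

def sigma_2_lower_bound():
    keys = list(product(STATES, SYMBOLS))              # 4 transition slots
    choices = list(product(SYMBOLS, MOVES, TARGETS))   # 12 options per slot
    best = 0
    for assignment in product(choices, repeat=len(keys)):  # 12**4 machines
        result = ones_if_halts(dict(zip(keys, assignment)))
        if result is not None:
            best = max(best, result)
    return best

if __name__ == "__main__":
    # Prints 4, which happens to equal the known 2-state busy beaver
    # value; the program by itself can only justify "at least 4".
    print(sigma_2_lower_bound())
```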
6856
1277175900
https://en.wikipedia.org/wiki?curid=6856
Chomsky (surname)
Chomsky (, , , , , "from (Vyoska) / (nearby Brest, now Belarus)") is a surname of Slavic origin. Notable people with the surname include:
6857
7611264
https://en.wikipedia.org/wiki?curid=6857
Computer multitasking
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents), loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking). Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs. Multitasking has been a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; when a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface. Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. Multiprogramming. In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. Multiprogramming is a computing technique that enables multiple programs to be loaded into a computer's memory and executed concurrently, allowing the CPU to switch between them swiftly. 
This optimizes CPU utilization by keeping it engaged with the execution of tasks, particularly useful when one program is waiting for I/O operations to complete. The Bull Gamma 60, initially designed in 1957 and first released in 1960, was the first computer designed with multiprogramming in mind. Its architecture featured a central memory and a Program Distributor feeding up to twenty-five autonomous processing units with code and data, and allowing concurrent operation of multiple clusters. Another such computer was the LEO III, first released in 1961. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. Cooperative multitasking. Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile. Preemptive multitasking. Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and Multics in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows. Possibly the earliest preemptive multitasking OS available to home users was Microware's OS-9, available for computers based on the Motorola 6809 such as the TRS-80 Color Computer 2, with the operating system supplied by Tandy as an upgrade for disk-equipped systems. Sinclair QDOS on the Sinclair QL followed in 1984, but it was not a big success. 
Commodore's Amiga was released the following year, offering a combination of multitasking and multimedia capabilities. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. In 1988 Apple offered A/UX as a UNIX System V-based alternative to the Classic Mac OS. In 2001 Apple switched to the NeXTSTEP-influenced Mac OS X. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. Real time. Another reason for multitasking was in the design of real-time computing systems, where there are a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities were given a greater share of available process time. Multithreading. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as "lightweight processes" because switching between threads does not involve changing the memory context. While threads are scheduled preemptively, some operating systems provide a variant to threads, named "fibers", that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors. Some systems directly support multithreading in hardware. Memory protection. Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security. In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault". In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL. 
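The explicitly shared memory described above can be sketched briefly. The example below uses Python's multiprocessing.shared_memory module as a stand-in for System V IPC, so it illustrates the general mechanism (one named block mapped into two processes) rather than that specific API; the block size and message are arbitrary choices.

```python
# Two processes sharing one block of memory by name. Python's
# multiprocessing.shared_memory is used here as a stand-in for System V
# style shared memory; the size and message are illustrative only.
from multiprocessing import Process, shared_memory

def writer(block_name: str) -> None:
    # Attach to the existing block by name and write a message into it.
    shm = shared_memory.SharedMemory(name=block_name)
    shm.buf[:5] = b"hello"
    shm.close()          # detach; the creator is responsible for unlinking

if __name__ == "__main__":
    # The parent creates the block; the operating system maps the same
    # physical pages into both processes' address spaces.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        child = Process(target=writer, args=(shm.name,))
        child.start()
        child.join()
        print(bytes(shm.buf[:5]))   # b'hello', written by the child
    finally:
        shm.close()
        shm.unlink()     # remove the block once no process needs it
```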
Inadequate memory protection mechanisms, whether due to flaws in their design or to poor implementations, allow for security vulnerabilities that may be exploited by malicious software. Memory swapping. Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage. Programming. Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
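As a brief example of the multithreaded programming style discussed above, the following sketch starts several threads that run in one shared memory context and must therefore synchronize access to shared data; the counter, the thread count, and the number of increments are arbitrary.

```python
# A minimal sketch of multithreading: all threads share the same memory context,
# so the shared counter is protected by a lock to avoid lost updates when the
# scheduler preempts a thread in the middle of an increment.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:       # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: every increment was preserved
```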
6859
34282584
https://en.wikipedia.org/wiki?curid=6859
Chiang Kai-shek
Chiang Kai-shek (31 October 1887 – 5 April 1975) was a Chinese politician, revolutionary, and general who led the Republic of China (ROC) from 1928 until his death in 1975. His government was based in mainland China until it was defeated in the Chinese Civil War by the Chinese Communist Party (CCP) in 1949, after which he continued to lead the Republic of China on the island of Taiwan. Chiang served as leader of the Nationalist Kuomintang (KMT) party and the commander-in-chief of the National Revolutionary Army (NRA) from 1926 until his death. Born in Zhejiang, Chiang received a military education in China and Japan and joined Sun Yat-sen's Tongmenghui organization in 1908. After the 1911 Revolution, he was a founding member of the KMT and head of the Whampoa Military Academy from 1924. After Sun's death in 1925, Chiang became leader of the party and commander-in-chief of the NRA, and from 1926 to 1928 led the Northern Expedition, which nominally reunified China under a Nationalist government based in Nanjing. The KMT–CCP alliance broke down in 1927 following the KMT's Shanghai Massacre, starting the Chinese Civil War. Chiang sought to modernise and unify the ROC during the Nanjing decade, although hostilities with the CCP continued. After Japan's invasion of Manchuria in 1931, his government tried to avoid a war while pursuing economic and social reconstruction. In 1936, Chiang was kidnapped by his generals in the Xi'an Incident and forced to form an anti-Japanese Second United Front with the CCP, and between 1937 and 1945 led China in the Second Sino-Japanese War, mostly from the wartime capital of Chongqing. As the leader of a major Allied power, he attended the 1943 Cairo Conference to discuss the terms for Japan's surrender in 1945, including the return of Taiwan, where he suppressed the February 28 uprising in 1947. When World War II ended, the civil war with the CCP (led by Mao Zedong) resumed. In 1949, Chiang's government was defeated and retreated to Taiwan, where he imposed martial law and the White Terror, a campaign of mass political repression; they lasted until 1987 and 1992, respectively. Beginning in 1948, he was elected by the same "Eternal Parliament" to five six-year terms as President of the ROC, the head of a de facto one-party state, for 25 years until his death. Chiang presided over land reform, economic growth, and crises in the Taiwan Strait in 1954–1955 and again in 1958. He was considered the legitimate leader of China by the United Nations until 1971, when the ROC's seat was transferred to the People's Republic of China. After Chiang's death in 1975, he was succeeded as leader of the KMT by his son Chiang Ching-kuo, who was elected president by the same parliament in successive terms from 1978. Chiang is a controversial figure. Supporters credit him with unifying the nation and ending the century of humiliation, leading the resistance against Japan, fostering economic development and promoting Chinese culture in contrast to Mao's Cultural Revolution. He is also credited with safeguarding Forbidden City treasures during the wars with Japan and the CCP, eventually relocating some of the best to Taiwan, where he founded the National Palace Museum. Critics fault him for his early pacifism toward Japan's occupation of Manchuria, the flooding of the Yellow River, cronyism and the toleration of corruption among the four big families, and his White Terror campaigns in both mainland China and Taiwan. Names.
Like many other Chinese historical figures, Chiang used several names throughout his life. The name inscribed in the genealogical records of his family is Chiang Chou-t‘ai. This so-called "register name" is the one by which his extended relatives knew him, and the one he used on formal occasions, such as when he was married. In deference to tradition, family members did not use the register name in conversation with people outside of the family. The concept of a "real" or original name has not historically been as clear-cut in China as it is in the Western world. In honor of tradition, Chinese families waited a number of years before officially naming their children. In the meantime, they used a "milk name", given to the infant shortly after his birth and known only to the close family. So the name that Chiang received at birth was Chiang Jui-yüan. In 1903, the 16-year-old Chiang went to Ningbo as a student, and chose a "school name". This was the formal name of a person, used by older people to address him, and the one he would use the most in the first decades of his life (as a person grew older, younger generations would use one of the courtesy names instead). Colloquially, the school name is called the "big name", whereas the "milk name" is known as the "small name". The school name that Chiang chose for himself was Zhiqing (meaning "purity of aspirations"). For the next fifteen years or so, Chiang was known as Jiang Zhiqing (Wade–Giles: Chiang Chi-ch‘ing). This is the name by which Sun Yat-sen knew him when Chiang joined the republicans in Guangdong in the 1910s. In 1912, when Chiang was in Japan, he started to use the name Chiang Kai-shek as a pen name for the articles that he published in a Chinese magazine he founded: "Voice of the Army". "Jieshi" is the pinyin romanization of this name, based on Standard Chinese, but the most recognized romanized rendering is "Kai-shek", which is in Cantonese romanization. Because the Republic of China was based in Guangdong (a Cantonese-speaking area), Chiang (who never spoke Cantonese but was a native Wu speaker) became known to Westerners under the Cantonese romanization of his courtesy name, while the family name as known in English seems to be the Mandarin pronunciation of his Chinese family name, transliterated in Wade–Giles. "Kai-shek" soon became Chiang's courtesy name. Some think the name was chosen from the classic Chinese book the "I Ching", where the phrase from which it derives begins line 2 of Hexagram 16. Others note that the first character of his courtesy name is also the first character of the courtesy name of his brother and other male relatives on the same generational line, while the second character of his courtesy name, "shi" (meaning "stone"), suggests the second character of his "register name", "tai" (referring to the famous Mount Tai). Courtesy names in China often bore a connection with the personal name of the person. As the courtesy name is the name used by people of the same generation to address the person, Chiang soon became known under this new name. Sometime in 1917 or 1918, as Chiang became close to Sun Yat-sen, he changed his name from Jiang Zhiqing to Jiang Zhongzheng. By adopting the name Chung-cheng, he was choosing a name very similar to the name of Sun Yat-sen, who is known among Chinese as Zhongshan (meaning "central mountain"), thus establishing a link between the two. The meaning of uprightness, rectitude, or orthodoxy implied by his name also positioned him as the legitimate heir of Sun Yat-sen and his ideas.
It was readily accepted by members of the Kuomintang, and is the name under which Chiang is still commonly known in Taiwan. Often the name is shortened to "Chung-cheng" only. Many public places in Taiwan are named Chungcheng after Chiang. For many years passengers arriving at the Chiang Kai-shek International Airport were greeted by signs in Chinese welcoming them to the "Chung Cheng International Airport". Similarly, the monument erected to Chiang's memory in Taipei, known in English as Chiang Kai-shek Memorial Hall, was named "Chung Cheng Memorial Hall" in Chinese. In Singapore, Chung Cheng High School was named after him. His name is also written in Taiwan as "The Late President Honorable Chiang", where the one-character-wide space in front of his name, known as nuo tai, shows respect. He is often called "Honorable Chiang". His surname "Chiang" is spelled in this article using the Wade–Giles system of transliteration for Standard Chinese rather than Hanyu Pinyin, even though the latter was adopted by the Republic of China government in 2009 as its official romanization. Early life. Chiang was born on 31 October 1887, in Xikou, a town in Fenghua, Zhejiang, to the west of central Ningbo. He was born into a family of Wu Chinese-speaking people whose ancestral home—a concept important in Chinese society—was a town in Yixing, Jiangsu, southwest of central Wuxi and near the shores of Lake Tai. He was the third child and second son of his father, Chiang Su-an (1842–1895), and the first child of his father's third wife (1863–1921), who were members of a prosperous family of salt merchants. Chiang's father died when he was eight, and he wrote of his mother as the "embodiment of Confucian virtues". The young Chiang was inspired throughout his youth by the realization that the reputation of an honored family rested upon his shoulders. He was a naughty child. At a young age he was interested in the military. As he grew older, Chiang became more aware of the issues that surrounded him, which he reflected upon in a speech to the Kuomintang in 1945. In early 1906, Chiang cut off his queue, the required hairstyle of men during the Qing dynasty, and had it sent home from school, shocking the people in his hometown. Education in Japan. Chiang grew up at a time in which military defeats, natural disasters, famines, revolts, unequal treaties and civil wars had left the Manchu-dominated Qing dynasty unstable and in debt. Successive demands of the Western powers and Japan since the Opium War had left China owing millions of taels of silver. During his first visit to Japan to pursue a military career, from April 1906 until later that year, he described himself as having strong nationalistic feelings and a desire, among other things, to 'expel the Manchu Qing and to restore China'. In a 1969 speech, Chiang related a story about his boat trip to Japan at nineteen years old. Another passenger on the ship, a Chinese fellow student who was in the habit of spitting on the floor, was chided by a Chinese sailor who said that Japanese people did not spit on the floor, but instead would spit into a handkerchief. Chiang used the story as an example of how the common man in 1969 Taiwan had not developed the spirit of public sanitation that Japan had. Chiang decided to pursue a military career. He began his military training at the Baoding Military Academy in 1906, the same year Japan left its bimetallic currency standard, devaluing the Japanese yen.
He left for Tokyo Shinbu Gakko, a preparatory school for the Imperial Japanese Army Academy intended for Chinese students, in 1907. There, he came under the influence of compatriots to support the revolutionary movement to overthrow the Manchu-dominated Qing dynasty and to set up a Han-dominated Chinese republic. He befriended Chen Qimei, and in 1908 Chen brought Chiang into the Tongmenghui, an important revolutionary brotherhood of the era. Finishing his military schooling at Tokyo Shinbu Gakko, Chiang served in the Imperial Japanese Army from 1909 to 1911. Returning to China. After learning of the Wuchang Uprising, Chiang returned to China in 1911, intending to fight as an artillery officer. He served in the revolutionary forces, leading a regiment in Shanghai under his friend and mentor Chen Qimei, as one of Chen's chief lieutenants. In early 1912 a dispute arose between Chen and Tao Chengzhang, an influential member of the Revolutionary Alliance who opposed both Sun Yat-sen and Chen. Tao sought to avoid escalating the quarrel by hiding in a hospital, but Chiang discovered him there. Chen dispatched assassins. Chiang may not have taken part in the assassination, but would later assume responsibility to help Chen avoid trouble. Chen valued Chiang despite Chiang's already legendary temper, regarding such bellicosity as useful in a military leader. Chiang's friendship with Chen Qimei signaled an association with Shanghai's criminal syndicate (the Green Gang headed by Du Yuesheng and Huang Jinrong). During Chiang's time in Shanghai, the Shanghai International Settlement police observed him and eventually charged him with various felonies. These charges never resulted in a trial, and Chiang was never jailed. Chiang became a founding member of the Nationalist Party (a forerunner of the KMT) after the success (February 1912) of the 1911 Revolution. After the takeover of the Republican government by Yuan Shikai and the failed Second Revolution in 1913, Chiang, like his KMT comrades, divided his time between exile in Japan and the havens of the Shanghai International Settlement. In Shanghai, Chiang cultivated ties with the city's underworld gangs, which were dominated by the notorious Green Gang and its leader Du Yuesheng. On 18 May 1916 agents of Yuan Shikai assassinated Chen Qimei. Chiang then succeeded Chen as leader of the Chinese Revolutionary Party in Shanghai. Sun Yat-sen's political career reached its lowest point during this time—most of his old Revolutionary Alliance comrades refused to join him in the exiled Chinese Revolutionary Party. Establishing the Kuomintang's position. In 1917, Sun Yat-sen moved his base of operations to Guangzhou, where Chiang joined him in 1918. At this time Sun remained largely sidelined; without arms or money, he was soon expelled from the city and exiled again to Shanghai, only to return to Guangdong with mercenary help in 1920. After his return, a rift developed between Sun, who sought to militarily unify China under the KMT, and Guangdong Governor Chen Jiongming, who wanted to implement a federalist system with Guangdong as a model province. On 16 June 1922 Ye Ju, a general of Chen's whom Sun had attempted to exile, led an assault on Guangdong's Presidential Palace. Sun had already fled to the naval yard and boarded the SS "Haiqi", but his wife narrowly evaded shelling and rifle-fire as she fled. They met on the SS "Yongfeng", where Chiang joined them as soon as he could return from Shanghai, where he was ritually mourning his mother's death. 
For about 50 days, Chiang stayed with Sun, protecting and caring for him and earning his lasting trust. They abandoned their attacks on Chen on 9 August, taking a British ship to Hong Kong and traveling to Shanghai by steamer. Sun regained control of Guangdong in early 1923, again with the help of mercenaries from Yunnan and of the Comintern. Undertaking a reform of the KMT, he established a revolutionary government aimed at unifying China under the KMT. That same year Sun sent Chiang to Moscow, where he spent three months studying the Soviet political and military system. There Chiang met Leon Trotsky and other Soviet leaders, but quickly came to the conclusion that the Russian model of government was not suitable for China. Chiang later sent his eldest son, Chiang Ching-kuo, to study in Russia. After his father's split from the First United Front in 1927, Ching-kuo was retained there as a hostage until 1937. Chiang wrote in his diary, "It is not worth it to sacrifice the interest of the country for the sake of my son." When Chiang returned in 1924, Sun appointed him Commandant of the Whampoa Military Academy. Chiang resigned after one month in disagreement with Sun's close cooperation with the Comintern, but returned at Sun's demand, and accepted Zhou Enlai as his political commissar. The early years at Whampoa allowed Chiang to cultivate a cadre of young officers loyal to both the KMT and himself. Throughout his rise to power, Chiang also benefited from membership within the nationalist Tiandihui fraternity, to which Sun Yat-sen also belonged, and which remained a source of support during his leadership of the Kuomintang. Rising power. Sun Yat-sen died on 12 March 1925, creating a power vacuum in the Kuomintang. A contest ensued among Wang Jingwei, Liao Zhongkai, and Hu Hanmin. In August, Liao was assassinated and Hu was arrested for his connections to the murderers. Wang Jingwei, who had succeeded Sun as chairman of the Guangdong regime, seemed ascendant but was forced into exile by Chiang following the Canton Coup. The gunboat, renamed the "Zhongshan" in Sun's honour, had appeared off Changzhou, the location of the Whampoa Academy, on apparently falsified orders and amid a series of unusual phone calls trying to ascertain Chiang's location. He initially considered fleeing Guangdong and even booked passage on a Japanese steamer but then decided to use his military connections to declare martial law on 20 March 1926 and to crack down on Communist and Soviet influence over the National Revolutionary Army, the military academy, and the party. The right wing of the party supported him, and Joseph Stalin, anxious to maintain Soviet influence in the area, had his lieutenants agree to Chiang's demands for a reduced Communist presence in the KMT leadership in exchange for certain other concessions. The rapid replacement of leadership enabled Chiang to effectively end civilian oversight of the military after 15 May, though his authority was somewhat limited by the army's own regional composition and divided loyalties. On 5 June 1926, he was named commander-in-chief of the National Revolutionary Army (NRA); on 27 July, he finally launched Sun's long-delayed Northern Expedition, aimed at conquering the northern warlords and bringing China together under the KMT.
The NRA branched into three divisions: to the west was the returned Wang Jingwei, who led a column to take Wuhan; Bai Chongxi's column went east to take Shanghai; Chiang himself led the middle route, planning to take Nanjing before pressing ahead to capture Beijing. However, in January 1927, Wang Jingwei and his KMT leftist allies took the city of Wuhan amid much popular mobilization and fanfare. Allied with a number of Chinese Communists and advised by Soviet agent Mikhail Borodin, Wang declared that the national government had moved to Wuhan. In 1927, when Chiang was setting up the Nationalist government in Nanjing, he was preoccupied with "the elevation of our leader Dr. Sun Yat-sen to the rank of 'Father of our Chinese Republic'. Dr. Sun worked for 40 years to lead our people in the Nationalist cause, and we cannot allow any other personality to usurp this honored position". He asked Chen Guofu to purchase a photograph that had been taken in Japan around 1898. It showed members of the Revive China Society with Yeung Ku-wan, as president, in the place of honor, and Sun, as secretary, in the back row, along with members of the Japanese chapter of the Revive China Society. When told that it was not for sale, Chiang offered a million dollars to recover the photo and its negative: "The party must have this picture and the negative at any price. They must be destroyed as soon as possible. It would be embarrassing to have our Father of the Chinese Republic shown in a subordinate position". On 12 April 1927, Chiang carried out a purge of thousands of suspected Communists and dissidents in Shanghai, and began large-scale massacres across the country collectively known as the "White Terror". During April alone, thousands of people were killed in Shanghai. The killings drove most Communists from the cities and into the rural countryside, where the KMT was less powerful. In the year after April 1927, over 300,000 people died across China in anti-communist suppression campaigns carried out by the KMT. One of Chiang's most infamous statements from that time was that he would rather mistakenly kill 1,000 innocent people than allow one Communist to escape. Some estimates claim the White Terror in China took millions of lives, most of them in rural areas. No concrete number can be verified. Chiang allowed Soviet agent and advisor Mikhail Borodin and Soviet general Vasily Blücher (Galens) to "escape" to safety after the purge. The NRA formed by the KMT swept through southern and central China until it was checked in Shandong, where confrontations with the Japanese garrison escalated into armed conflict. The conflicts were collectively known as the Jinan incident of 1928. Now with an established national government in Nanjing, and supported by conservative allies including Hu Hanmin, Chiang's expulsion of the Communists and their Soviet advisers led to the beginning of the Chinese Civil War. Wang Jingwei's National Government was weak militarily, and was soon ended by Chiang with the support of a local warlord (Li Zongren of Guangxi). Eventually, Wang and his leftist party surrendered to Chiang and joined him in Nanjing. However, the cracks between Chiang and the traditionally right-wing KMT faction around Hu, the Western Hills Group, began to show soon after the purge of the communists, and Chiang later imprisoned Hu. Though Chiang had consolidated the power of the KMT in Nanjing, it was still necessary to capture Beijing to claim the legitimacy needed for international recognition.
Beijing was taken in June 1928 by an alliance that included the warlords Feng Yuxiang and Yan Xishan. Yan Xishan moved in and captured Beiping on behalf of his new allegiance after the death of Zhang Zuolin in 1928. His successor, Zhang Xueliang, accepted the authority of the KMT leadership, and the Northern Expedition officially concluded, completing Chiang's nominal unification of China and ending the Warlord Era. After the Northern Expedition ended in 1928, Yan, Feng, Li Zongren and Zhang Fakui broke off relations with Chiang shortly after a demilitarization conference in 1929, and together they formed an anti-Chiang coalition to openly challenge the legitimacy of the Nanjing government. In the Central Plains War, they were defeated. Chiang made great efforts to gain recognition as the official successor of Sun Yat-sen. In a pairing of great political significance, Chiang was Sun's brother-in-law: he had married Soong Mei-ling, the younger sister of Soong Ching-ling, Sun's widow, on 1 December 1927. Originally rebuffed in the early 1920s, Chiang managed to ingratiate himself to some degree with Soong Mei-ling's mother by first divorcing his wife and concubines and promising to sincerely study the precepts of Christianity. He read the copy of the Bible that Mei-ling had given him twice before making up his mind to become a Christian, and three years after his marriage he was baptized in the Soongs' Methodist church. Although some observers felt that he adopted Christianity as a political move, studies of his recently opened diaries suggest that his faith was strong and sincere and that he felt that Christianity reinforced Confucian moral teachings. Upon reaching Beijing, Chiang paid homage to Sun Yat-sen and had his body moved to the new capital of Nanjing to be enshrined in a mausoleum, the Sun Yat-sen Mausoleum. In the West and in the Soviet Union, Chiang Kai-shek was known as the "Red General". Movie theaters in the Soviet Union showed newsreels and clips of Chiang. At Moscow Sun Yat-sen University, portraits of Chiang were hung on the walls; and, in the Soviet May Day parades that year, Chiang's portrait was to be carried along with the portraits of Karl Marx, Vladimir Lenin, Joseph Stalin, and other Communist leaders. The United States consulate and other Westerners in Shanghai were concerned about the approach of "Red General" Chiang as his army was seizing control of large areas of the country in the Northern Expedition. Rule. Although Chiang had gained control of China, his party remained surrounded by defeated warlords who remained relatively autonomous within their own regions. On 10 October 1928, Chiang was named director of the State Council, the equivalent of president of the country, in addition to his other titles. As with his predecessor Sun Yat-sen, the Western media dubbed him "generalissimo". According to Sun Yat-sen's plans, the KMT was to rebuild China in three steps: military rule, political tutelage, and constitutional rule. The ultimate goal of the KMT revolution was democracy, which was not considered to be feasible in China's fragmented state. Since the KMT had completed the first step of revolution through seizure of power in 1928, Chiang's rule thus began a period of what his party considered to be "political tutelage" in Sun Yat-sen's name. During this so-called Republican Era, many features of a modern, functional Chinese state emerged and developed.
From 1928 to 1937, known as the Nanjing decade, various aspects of foreign imperialism, concessions and privileges in China were moderated by diplomacy. The government acted to modernize the legal and penal systems and attempted to stabilize prices, amortize debts, reform the banking and currency systems, build railroads and highways, improve public health facilities, legislate against traffic in narcotics, and augment industrial and agricultural production. Efforts were made to improve education standards, and the national academy of sciences, Academia Sinica, was founded. In an effort to unify Chinese society, the New Life Movement was launched to encourage Confucian moral values and personal discipline. "Guoyu" ("national language") was promoted as the official language, and the establishment of communications facilities (including radio) was used to encourage a sense of Chinese nationalism in a way that had not been possible when the nation lacked an effective central government. In that context, the Chinese Rural Reconstruction Movement was implemented by social activists, many of them professors who had been educated in the United States, and made tangible but limited progress in modernizing the tax systems, infrastructure, economy, culture, and education of rural regions. The social activists co-ordinated actively with the local governments in towns and villages from the early 1930s. However, the policy was subsequently neglected and canceled by Chiang's government because of rampant warfare and the lack of resources during the war with Japan and the civil war. Despite being a conservative, Chiang supported modernization policies such as scientific advancement, universal education, and women's rights. The Kuomintang supported women's suffrage and education and the abolition of polygamy and foot binding. Under Chiang's leadership, the Republic of China government also enacted a women's quota in the parliament, with reserved seats for women. During the Nanjing Decade, average Chinese citizens received education that they had been denied under the dynasties. That increased the literacy rate across China and also promoted the Tridemist ideals of democracy, republicanism, science, constitutionalism, and Chinese nationalism under the Dang Guo (party-state) system of the KMT. Any successes that the Nationalists achieved, however, were met with constant political and military upheavals. Many of the urban areas were now under the control of the KMT, but much of the countryside remained under the influence of weakened but undefeated warlords, landlords, and Communists. Chiang often resolved issues of warlord obstinacy through military action, but such action was costly in terms of men and material. The Central Plains War alone nearly bankrupted the Nationalist government and caused enormous casualties on both sides. In 1931, Hu Hanmin, an old supporter of Chiang, publicly voiced a popular concern that Chiang's position as both premier and president flew in the face of the democratic ideals of the Nationalist government. Chiang had Hu put under house arrest, but Hu was released after national condemnation. Hu then left Nanjing and supported a rival government in Guangzhou. The split resulted in a military conflict between Hu's Guangdong government and Chiang's Nationalist government. Throughout his rule, complete eradication of the Communists remained Chiang's dream. After he had assembled his forces in Jiangxi, Chiang led his armies against the newly established Chinese Soviet Republic.
With help from foreign military advisers such as Max Bauer and Alexander von Falkenhausen, Chiang's Fifth Campaign finally surrounded the Chinese Red Army in 1934. The Communists, tipped off that a Nationalist offensive was imminent, retreated in the Long March, during which Mao rose from a mere military official to the most influential leader of the Chinese Communist Party. Some academics and historians have classified Chiang's rule as fascist. The New Life Movement, initiated by Chiang, was based upon Confucianism mixed with Christianity, nationalism, and authoritarianism, and had some similarities to fascism. Frederic Wakeman argued that the New Life Movement was "Confucian fascism". Chiang also sponsored the creation of the Blue Shirts Society, in conscious imitation of the Blackshirts in the Italian Fascist Party and the "Sturmabteilung" of the Nazi Party. Its ideology was to expel foreign (Japanese and Western) imperialists from China and to crush communism. Close ties with Nazi Germany also gave the Nationalist government access to German military and economic assistance during the mid-1930s. In a 1935 speech, Chiang stated that "fascism is what China now most needs" and described fascism as the stimulant for a declining society. Mao once derogatorily compared Chiang to Adolf Hitler, referring to him as the "Führer of China". Sino-German relations rapidly deteriorated as Germany grew closer to Japan and almost completely broke down when Japan launched a full-scale invasion of China in 1937, which Germany failed to mediate. However, China did not declare war on Germany, Italy, or even Japan until after the attack on Pearl Harbor in December 1941. Chinese Communists and many conservative anti-communist writers have argued that Chiang was pro-capitalist, based on the alliance thesis (the alliance between Chiang and the capitalists to purge the communist and leftist elements in Shanghai, as well as in the resulting civil war). However, Chiang also antagonized the capitalists of Shanghai by often attacking them and confiscating their capital and assets for government use even while he denounced and fought against communists. Critics have called that "bureaucratic capitalism". Historian Parks M. Coble argues that the phrase "bureaucratic capitalism" is too simplistic to adequately characterize this phenomenon. Instead, he says, the regime weakened all social forces so that the government could pursue policies without being responsible or responsive to any outside political groups. With any potential challenge to its power defeated, government officials could amass sizable fortunes. With that motive, Chiang cracked down on pro-communist worker and peasant organizations, as well as on rich Shanghai capitalists. Chiang also continued the anti-capitalist rhetoric of Sun Yat-sen and directed the Kuomintang media to attack the capitalists and capitalism openly. He supported government-controlled industries instead. Coble says that the rhetoric had no impact on governmental policy and that its use was to prevent the capitalists from claiming legitimacy within the party or society and to control them and their wealth. Authority within the Nationalist government ultimately lay with Chiang. All major policy changes on military, diplomatic, or economic issues required his approval. According to historian Odd Arne Westad, "no other leader within the [KMT] had the authority to force through even the simplest decisions.
The practical power of high-ranking officials like ministers or the head of the Executive Yuan was more closely tied to their relationship with Chiang than with the formal authority of their position". Chiang created multiple layers of power in his administration which he sometimes played off against each other to prevent individuals or cliques from gathering power that could oppose his authority. Contrary to the critique that Chiang was highly corrupt, he was not involved in corruption himself. However, his wife, Soong Mei-ling, ignored her family's involvement in corruption. The Soong family embezzled $20 million in the course of the 1930s and the 1940s, when the Nationalist government's revenues were less than $30 million per year. The Soong family's eldest son, T.V. Soong, was the Chinese premier and finance minister, and the eldest daughter, Soong Ai-ling, was the wife of Kung Hsiang-hsi, the wealthiest man in China. The second daughter, Soong Ching-ling, was the wife of Sun Yat-sen, China's founding father. The youngest daughter, Soong Mei-ling, married Chiang in 1927, and following the marriage, both families became intimately connected, which created the "Soong dynasty" and the "Four Families". However, Soong Mei-ling was also credited for her campaign for women's rights in China, including her attempts to improve the education, culture, and social benefits of Chinese women. Critics have said that the "Four Families" monopolized the regime and looted it. The US sent considerable aid to the Nationalist government but soon realized the widespread corruption. Military supplies that were sent appeared on the black market. Significant sums of money that had been transmitted through T. V. Soong, China's finance minister, soon disappeared. President Truman famously said of the Nationalist leaders, "They're thieves, every damn one of them." He also said, "They stole $750 million out of the billions that we sent to Chiang. They stole it, and it's invested in real estate down in São Paulo and some right here in New York." Soong Mei-ling and Soong Ai-ling lived luxurious lifestyles and held millions in property, clothes, art, and jewelry. Soong Ai-ling and Soong Mei-ling were also the two richest women in China. Despite living luxuriously for almost her entire life, Soong Mei-ling left an inheritance of only $120,000; according to her niece, this was because she had donated most of her wealth while she was still alive. Chiang, requiring support, tolerated corruption among people in his inner circle and among high-ranking Nationalist officials, but not among lower-ranking officers. In 1934, he ordered seven military officers who embezzled state property to be shot. In another case, several division commanders pleaded with Chiang to pardon a criminal officer, but as soon as the division commanders had left, Chiang ordered him shot. The deputy editor and chief reporter at the Central Daily News, Lu Keng, made headline international news by exposing the corruption of two senior officials, Kong Xiangxi (H. H. Kung) and T. V. Soong. Chiang then ordered a thorough investigation of the Central Daily News to find the source. However, Lu risked execution by refusing to comply and protecting his journalists. Chiang, wanting to avoid an international response, jailed Lu instead. Chiang realized the widespread problems that corruption was creating, so he undertook several anti-corruption campaigns before and after World War II with varying success.
The first two campaigns, the Nanjing Decade Cleanup of 1927–1930 and the Wartime Reform Movement of 1944–1947, failed. The two campaigns that followed World War II and the KMT retreat to Taiwan, the Kuomintang Reconstruction of 1950–1952 and the Governmental Rejuvenation of 1969–1973, succeeded. Chiang, who viewed all of the foreign great powers with suspicion, wrote in a letter that they "all have it in their minds to promote the interests of their own respective countries at the cost of other nations" and saw it as hypocritical for any of them to condemn one another's foreign policy. He used diplomatic persuasion on the United States, Nazi Germany, and the Soviet Union to regain lost Chinese territories, as he viewed all foreign powers as imperialists that were attempting to exploit China. First phase of Chinese Civil War. During April 1931, Chiang Kai-shek attended a national leadership conference in Nanjing with Zhang Xueliang and General Ma Fuxiang, during which Chiang and Zhang dauntlessly upheld that Manchuria was part of China in the face of the Japanese invasion. After the Japanese invasion of Manchuria in 1931, Chiang resigned as Chairman of the National Government. He returned shortly afterward and adopted the slogan "first internal pacification, then external resistance." However, his policy of avoiding a frontal war against Japan and prioritizing anti-communist suppression was widely unpopular and provoked nationwide protests. In 1932, while Chiang was seeking first to defeat the Communists, Japan launched an advance on Shanghai and bombarded Nanjing. That disrupted Chiang's offensives against the Communists for a time, but it was the northern factions of Hu Hanmin's Guangdong government (notably the 19th Route Army) that primarily led the offensive against the Japanese during the skirmish. Brought into the NRA immediately after the battle, the 19th Route Army had a short career under Chiang; it was disbanded for demonstrating socialist tendencies. In December 1936, Chiang flew to Xi'an to co-ordinate a major assault on the Red Army and the CPC, which had retreated into Yan'an. However, Chiang's allied commander Zhang Xueliang, whose forces were used in his attack and whose homeland of Manchuria had been recently invaded by the Japanese, did not support the attack on the Communists. On 12 December, Zhang and several other Nationalist generals, headed by Yang Hucheng of Shaanxi, kidnapped Chiang and held him for two weeks in what is known as the Xi'an Incident. They forced Chiang into making a "Second United Front" with the Communists against Japan. After releasing Chiang and returning to Nanjing with him, Zhang was placed under house arrest, and the generals who had assisted him were executed. Chiang's commitment to the Second United Front was nominal at best, and the front was all but dissolved in 1941. Second Sino-Japanese War. The Second Sino-Japanese War broke out in July 1937, and in August, Chiang sent many of his best-trained and best-equipped soldiers to defend Shanghai. With over 200,000 Chinese casualties, Chiang lost the political cream of his Whampoa-trained officers. Although Chiang lost militarily, the battle dispelled Japan's claims that it could conquer China in three months and also demonstrated to the Western powers that the Chinese would continue the fight. By December, the capital city of Nanjing had fallen to the Japanese, resulting in the Nanjing Massacre. Chiang moved the government inland, first to Wuhan and later to Chongqing.
Having lost most of China's economic and industrial centers, Chiang withdrew into the hinterlands, stretched the Japanese supply lines, and bogged down Japanese soldiers in the vast Chinese interior. As part of a policy of protracted resistance, Chiang authorized the use of scorched-earth tactics, which resulted in many civilian deaths. During the Nationalists' retreat from Zhengzhou, the dams around the city were deliberately destroyed by the National Revolutionary Army to delay the Japanese advance, and the subsequent 1938 Yellow River flood killed 800,000 to one million people. Four million Chinese were left homeless. Chiang and the KMT were slow to provide disaster relief. After heavy fighting, the Japanese occupied Wuhan in the fall of 1938, and the Nationalists retreated farther inland to Chongqing. En route to Chongqing, the Nationalist Army intentionally started the Changsha Fire as a part of its scorched-earth policy. The fire destroyed much of the city, killed 20,000 civilians, and left hundreds of thousands of people homeless. An organizational error (it was claimed) caused the fire to be started without any warning to the residents of the city. The Nationalists eventually blamed three local commanders for the fire and executed them. Newspapers across China blamed the fire on (non-KMT) arsonists, but the blaze contributed to a nationwide loss of support for the KMT. In 1939, the Muslim leaders Isa Yusuf Alptekin and Ma Fuliang were sent by Chiang to several Middle Eastern countries, including Egypt, Turkey, and Syria, to gain support for the war against Japan and to express his support for Muslims. The Japanese, controlling the puppet state of Manchukuo and much of China's eastern seaboard, appointed Wang Jingwei as a puppet ruler of the occupied Chinese territories around Nanjing. Wang named himself President of the Executive Yuan and chairman, and he led a surprisingly large minority of anti-Chiang and anti-Communist Chinese against his old comrades. He died in 1944, a year before the end of World War II. The Hui Xidaotang sect pledged allegiance to the Kuomintang after the party's rise to power, and Hui general Bai Chongxi acquainted Chiang with the Xidaotang Juaozhu Ma Mingren in 1941 in Chongqing. In 1942 Chiang went on tour in northwestern China in Xinjiang, Gansu, Ningxia, Shaanxi, and Qinghai, where he met the Muslim Generals Ma Buqing and Ma Bufang. He also met the Muslim Generals Ma Hongbin and Ma Hongkui separately. A border crisis erupted with Tibet in 1942. Under orders from Chiang, Ma Bufang repaired Yushu Airport to prevent Tibetan separatists from seeking independence. Chiang also ordered Ma Bufang to put his Muslim soldiers on alert for an invasion of Tibet in 1942. Ma Bufang complied and moved several thousand troops to the Tibetan border. Chiang also threatened the Tibetans with aerial bombardment if they worked with the Japanese. Ma Bufang attacked the Tibetan Buddhist Tsang monastery in 1941. He also constantly attacked the Labrang Monastery. After the attack on Pearl Harbor and the opening of the Pacific War, China became one of the Allies. During and after World War II, Chiang and his American-educated wife, Soong Mei-ling, known in the United States as "Madame Chiang", held the support of the American China Lobby, which saw in them the hope of a Christian and democratic China. Chiang was even named the Supreme Commander of Allied forces in the China war zone. He was appointed Knight Grand Cross of the Order of the Bath in 1942. 
General Joseph Stilwell, an American military advisor to Chiang during World War II, strongly criticized Chiang and his generals for what Stilwell saw as their incompetence and corruption. In 1944, the United States Army Air Forces commenced Operation Matterhorn to bomb Japan's steel industry from bases to be constructed in mainland China. That was meant to fulfill US President Franklin D. Roosevelt's promise to Chiang to begin bombing operations against Japan by November 1944. However, Chiang's subordinates refused to take air base construction seriously until enough capital had been delivered to permit embezzlement on a massive scale. Stilwell estimated that at least half of the $100 million spent on construction of air bases was embezzled by Nationalist party officials. The poor performance of Nationalist forces during the Japanese Ichigo campaign contributed to the view that Chiang was incompetent, and it irreparably damaged Chiang and the Nationalists in the view of the Roosevelt administration. Chiang argued that the United States, and Stilwell in particular, were at fault for the failure because they had moved too many Chinese troops into the Burma campaign. After the Japanese surrender, Chiang had to rely on the assistance of the United States in order to transport his troops to regain control of occupied areas. Non-Chinese observers saw the behavior of these troops and the accompanying officials as undercutting Nationalist legitimacy, as Nationalist forces engaged in a "botched liberation" characterized by corruption, looting, and inefficiency. Chiang tried to balance the influence of the Soviets and the Americans in China during the war. He first told the Americans that they would be welcome in talks between the Soviet Union and China and then secretly told the Soviets that the Americans were unimportant and that their opinions would not be considered. Chiang also used American support and military power in China against Soviet ambitions to dominate the talks. The threat of American military action thus stopped the Soviets from taking full advantage of the situation in China. Chiang's Nationalist government made laws on abortion in China more restrictive during the Second Sino-Japanese War. French Indochina. President Roosevelt, through General Stilwell, privately made it clear that he preferred that the French not reacquire French Indochina (now Vietnam, Cambodia and Laos) after the war was over. Roosevelt offered Chiang control of all of Indochina. It was said that Chiang replied in English, "Under no circumstances!" After the war, 200,000 Chinese troops under General Lu Han were sent by Chiang to northern Indochina (north of the 16th parallel) to accept the surrender of Japanese occupying forces there, and the Chinese forces remained in Indochina until 1946, when the French returned. The Chinese used the VNQDD, the Vietnamese branch of the Kuomintang, to increase their influence in Indochina and to put pressure on their opponents. Chiang threatened the French with war in response to maneuvering by the French and Ho Chi Minh's forces against each other and forced them to come to a peace agreement. In February 1946, he also forced the French to surrender all of their concessions in China and to renounce their extraterritorial privileges in exchange for the Chinese withdrawing from northern Indochina and allowing French troops to reoccupy the region.
After France's agreement to those demands, 20,000 French soldiers landed in Haiphong, North Vietnam, on 6 March 1946, under the leadership of general Philippe Leclerc de Hauteclocque, followed by the withdrawal of Chinese troops which began in March 1946. Ryukyus. According to Republic of China's notes of a dinner meeting during the Cairo Conference in 1943, Roosevelt asked Chiang whether China desired the Ryukyu Islands as territories restored from Japan. Chiang said he would be agreeable to joint occupation and administration by China and the United States. Second phase of Chinese Civil War. Treatment and use of Japanese soldiers. Because of Chiang's focus on his communist opponents, he allowed some Japanese forces and forces from the Japanese puppet regimes to remain on duty in occupied areas in an effort to prevent the communists from accepting their surrender. American troops and weapons soon bolstered the Nationalist forces, which allowed them to reclaim the cities. The countryside, however, remained largely under Communist control. Chiang implemented his war-time phrase "repay evil with good" and made a huge effort to protect elements of the Japanese invading army. In 1949, a Nationalist court acquitted General Okamura Yasuji, the chief commander of Japanese forces in China, of alleged war crimes, retaining him as an advisor. Nationalist China repeatedly intervened to protect Okamura from repeated American requests to testify at the Tokyo war crimes trial. Many top Nationalist generals, including Chiang, had studied and trained in Japan before the Nationalists had returned to the mainland in the 1920s and maintained close personal friendships with top Japanese officers. The Japanese general in charge of all forces in China, General Okamura, had personally trained officers who later became generals in Chiang's staff. Reportedly, Chiang seriously considered accepting this offer but declined only because he knew that the United States would certainly be outraged by the gesture. Even so, armed Japanese troops remained in China well into 1947, with some non-commissioned officers finding their way into the Nationalist officer corps. The Japanese in China came to regard Chiang as a magnanimous figure to whom many of them owed their lives and livelihoods; that fact was attested by both Nationalist and Communist sources. Conditions during Chinese Civil War. Chiang did not de-mobilize his troops after the defeat of the Japanese, instead remaining on a war footing to prepare for the resumption of civil war against the Communists. This further strained the economy of Nationalist-era China, worsening deficits. A significant body of evidence suggests that much of the Nationalist military budget in this period was wasted. One factor in military budget waste included that troop counts were inflated above actual head counts and that officers embezzled the salaries of the non-existent soldiers. Another was the power of military commanders over local branches of the Bank of China, which they could require to provide currency outside of the normal budget process. Although Chiang had achieved status abroad as a world leader, his government deteriorated as the result of corruption and hyperinflation. In his diary in June 1948, Chiang wrote that the KMT had failed not because of external enemies but because of rot from within. The war had severely weakened the Nationalists, and the Communists were strengthened by their popular land reform policies and by a rural population that supported and trusted them. 
The Nationalists initially had superiority in arms and men, but their lack of popularity, infiltration by Communist agents, low morale, and disorganization soon allowed the Communists to gain the upper hand in the civil war. After World War II, the United States encouraged peace talks between Chiang and the Communist leader, Mao Zedong, in Chongqing. Concerns about widespread and well-documented corruption in Chiang's government throughout his rule made the US government limit aid to Chiang for much of the period of 1946 to 1948 despite the fighting against Mao's Red Army. Alleged infiltration of the US government by CCP agents may have also played a role in the suspension of American aid. Chiang's right-hand man, the secret police chief Dai Li, was anti-American and anti-Communist and a self-declared fascist. Dai ordered Kuomintang agents to spy on American officers. Earlier, Dai had been involved with the Blue Shirts Society, a fascist-inspired paramilitary group within the Kuomintang that wanted to expel Western and Japanese imperialists, crush the Communists, and eliminate feudalism. Dai Li died in a plane crash, which some suspect to be an assassination orchestrated by Chiang; however, the assassination was also rumoured to have been arranged by the American Office of Strategic Services because of Dai's anti-Americanism and since it happened on an American plane. Conflict with Li Zongren. A new constitution was promulgated in 1947, and Chiang was elected by the National Assembly as the first President of the Republic of China on 20 May 1948. That marked the beginning of what was termed the "democratic constitutional government" period by the KMT political orthodoxy, but the Communists refused to recognize the new Constitution, and its government as legitimate. Chiang resigned as president on 21 January 1949, as Nationalist forces suffered terrible losses and defections to the Communists. After Chiang's resignation, vice-president Li Zongren became China's acting president. Shortly after Chiang's resignation, the Communists halted their advances and attempted to negotiate the Nationalists' virtual surrender. Li tried to negotiate milder terms to end the civil war but had no success. When it became clear that Li was unlikely to accept Mao's terms, the Communists issued an ultimatum in April 1949 that warned that they would resume their attacks if Li did not agree within five days. Li refused. Li's attempts to carry out his policies faced varying degrees of opposition from Chiang's supporters and were generally unsuccessful. Taylor has noted that Chiang had a superstitious belief in holding Manchuria. After the Nationalist military defeat in the province, Chiang lost faith in winning the war and started to prepare for the retreat to Taiwan. Chiang especially antagonized Li by taking possession of and moving to Taiwan US$200 million of gold and US dollars that belonged to the central government. Li desperately needed them to cover the government's soaring expenses. When the Communists captured the Nationalist capital of Nanjing in April 1949, Li refused to accompany the central government as it fled to Guangdong and instead expressed his dissatisfaction with Chiang by retiring to Guangxi. The former warlord Yan Xishan, who had fled to Nanjing only one month earlier, quickly insinuated himself within the Li-Chiang rivalry and attempted to have Li and Chiang reconcile their differences in the effort to resist the Communists. 
At Chiang's request, Yan visited Li to convince Li not to withdraw from public life. Yan broke down in tears while he talked of the loss of his home province of Shanxi to the Communists, and he warned Li that the Nationalist cause was doomed unless Li went to Guangdong. Li agreed to return if Chiang surrendered most of the gold and US dollars in his possession that belonged to the central government, and Chiang stopped overriding Li's authority. After Yan communicated those demands and Chiang agreed to comply with them, Li departed for Guangdong. In Guangdong, Li attempted to create a new government composed of both supporters and opponents of Chiang. Li's first choice of premier was Chu Cheng, a veteran member of the Kuomintang who had been virtually driven into exile for his strong opposition to Chiang. After the Legislative Yuan had rejected Chu, Li was obliged to choose Yan Xishan instead. By then, Yan was well known for his adaptability, and Chiang welcomed his appointment. The conflict between Chiang and Li persisted. Although he had agreed to do so as a prerequisite of Li's return, Chiang refused to surrender more than a fraction of the wealth that he had sent to Taiwan. Without being backed by gold or foreign currency, the money that was issued by Li and Yan quickly declined in value until it became virtually worthless. Although he did not hold a formal executive position in the government, Chiang continued to issue orders to the army, and many officers continued to obey Chiang rather than Li. The inability of Li to co-ordinate KMT military forces led him to put into effect a plan of defense that he had contemplated in 1948. Instead of attempting to defend all of southern China, Li ordered what remained of the Nationalist armies to withdraw to Guangxi and Guangdong. He hoped that he could concentrate all available defenses on the smaller area, which would be more easily defensible. The object of Li's strategy was to maintain a foothold on the Chinese mainland in the hope that the United States would eventually be compelled to enter the war in China on the Nationalist side. Final Communist advance. Chiang opposed Li's plan of defense because it would have placed most of the troops who were still loyal to Chiang under the control of Li and Chiang's other opponents in the central government. To overcome Chiang's intransigence, Li began ousting Chiang's supporters within the central government. Yan Xishan continued in his attempts to work with both sides, which created the impression among Li's supporters that he was a stooge of Chiang, and those who supported Chiang began to bitterly resent Yan for his willingness to work with Li. Because of the rivalry between Chiang and Li, Chiang refused to allow Nationalist troops loyal to him to aid in the defense of Guangxi and Guangdong. That allowed Communist forces to occupy Guangdong in October 1949. After Guangdong fell to the Communists, Chiang relocated the government to Chongqing, and Li effectively surrendered his powers and flew to New York for treatment of his chronic duodenal illness at the Hospital of Columbia University. Li visited President Truman and denounced Chiang as a dictator and a usurper. Li vowed that he would "return to crush" Chiang once he returned to China. Li remained in exile and did not return to Taiwan.
In the early morning of 10 December 1949, Communist troops laid siege to Chengdu, the last KMT-controlled city in mainland China, where Chiang Kai-shek and his son Chiang Ching-kuo directed the defense at the Chengtu Central Military Academy. Flying out of Chengdu Fenghuangshan Airport, father and son were evacuated to Taiwan via Guangdong on the aircraft "May-ling" and arrived the same day. Chiang Kai-shek would never return to the mainland. Historian Odd Arne Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang had. Also, his search for a powerful centralized government made Chiang antagonize too many interest groups in China. Furthermore, his party was weakened by the war against Japan. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear and cloaked themselves in the cover of Chinese nationalism. Chiang did not reassume the presidency until 1 March 1950. In January 1952, Chiang commanded the Control Yuan, now in Taiwan, to impeach Li in the "Case of Li Zongren's Failure to carry out Duties due to Illegal Conduct" (李宗仁違法失職案). Chiang relieved Li of the position as vice-president of the National Assembly in March 1954. In Taiwan. Preparations to retake the mainland. Chiang moved the government to Taipei, Taiwan, where he resumed his duties as president on 1 March 1950. Chiang was re-elected by the National Assembly to be the President of the Republic of China on 20 May 1954, and again in 1960, 1966, and 1972. He continued to claim sovereignty over all of China, including the territories held by his government and the People's Republic, as well as territory the latter ceded to foreign governments, such as Tuva and Outer Mongolia. In the context of the Cold War, most of the Western world recognized that position, and the ROC represented China in the United Nations and other international organizations until the 1970s. During his presidency on Taiwan, Chiang continued making preparations to take back mainland China. He developed the JROTC army to prepare for an invasion of the mainland and to defend Taiwan in case of an attack by the Communist forces. He also financed armed groups in mainland China, such as Muslim soldiers of the ROC Army who had been left in Yunnan under Li Mi and continued to fight. It was not until the 1980s that those troops were finally airlifted to Taiwan. He promoted the Uyghur Yulbars Khan to governor during the Islamic insurgency on the mainland for resisting the Communists even though the government had already evacuated to Taiwan. He planned an invasion of the mainland in 1962. In the 1950s, Chiang's airplanes dropped supplies to Kuomintang Muslim insurgents in Qinghai, in the traditional Tibetan area of Amdo. Regime in Taiwan. Despite an ostensibly democratic constitution, the government under Chiang was a de facto one-party state, consisting almost completely of mainlanders; the "Temporary Provisions Effective During the Period of Communist Rebellion" greatly enhanced the executive's powers, and the goal of retaking mainland China allowed the KMT to maintain a monopoly on power and to prohibit real parliamentary opposition. The government's official line for the martial law provisions stemmed from the claim that emergency provisions were necessary since the Communists and the Nationalists were still in a state of war. 
Seeking to promote Chinese nationalism, Chiang's government actively ignored and suppressed local cultural expression and even forbade the use of local languages in mass media broadcasts or during class sessions. In the wake of Taiwan's 1947 anti-government uprising, known as the February 28 incident, KMT-led political repression resulted in the death or disappearance of up to 30,000 Taiwanese intellectuals, activists, and people suspected of opposition to the KMT. In the aftermath of the retreat to Taiwan, Chiang became increasingly disillusioned with the Kuomintang (KMT), believing that rampant corruption, power-brokering, and factional struggles—particularly the CC Clique, which challenged Chiang's authority—had severely undermined the party's ability to govern effectively. At one point, he considered dissolving the KMT altogether and replacing it with a new party. However, in 1950, he ultimately chose to initiate a major reform effort within the KMT, launching the Party Reform Program (國民黨改造方案) and establishing the Central Reform Committee (中央改造委員會). The committee aimed to emulate aspects of the Chinese Communist Party's organizational structure, seeking to create a highly disciplined, centralized, and popularly supported party apparatus that could exert top-down authoritarian control while incorporating grassroots feedback. The reform plan called for rapid party expansion, increasing membership from 80,000 to 500,000 within five years, and implementing KMT branches within public institutions such as schools. Additionally, Chiang sought to root out corrupt officials and establish a meritocratic system, mandating that government positions be filled primarily by technocrats selected from top universities. The first decades after the Nationalists had moved the seat of government to the province of Taiwan are associated with the organized effort to resist Communism known as the "White Terror", during which about 140,000 Taiwanese were imprisoned for their real or perceived opposition to the Kuomintang. Most of those prosecuted were labeled by the Kuomintang as "bandit spies" (匪諜), meaning spies for the Chinese Communists, or as "Taiwanese separatists" (台獨分子), and punished as such. Under the pretext that new elections could not be held in Communist-occupied constituencies, the members of the National Assembly, Legislative Yuan, and Control Yuan held their posts indefinitely. The Temporary Provisions also allowed Chiang to remain as president beyond the two-term limit in the Constitution. He was re-elected by the National Assembly as president four times: in 1954, 1960, 1966, and 1972. Believing that corruption and a lack of morals were key reasons that the KMT had lost mainland China to the Communists, Chiang attempted to purge corruption by dismissing members of the KMT who were accused of graft. Some major figures in the previous mainland Chinese government, such as Chiang's brothers-in-law H. H. Kung and T. V. Soong, and nephew Chen Lifu, exiled themselves to the United States. Although politically authoritarian and, to some extent, dominated by government-owned industries, Chiang's new Taiwanese state also encouraged economic development, especially in the export sector. A popular sweeping Land Reform Act, as well as American foreign aid during the 1950s, laid the foundation for Taiwan's economic success as one of the Four Asian Tigers. 
After retreating to Taiwan, Chiang learned from his mistakes and failures on the mainland, attributing them to his failure to pursue Sun Yat-sen's ideals of Tridemism and welfarism. Chiang's land reform more than doubled the land ownership of Taiwanese farmers. It removed their rent burdens, with former landowners using the government compensation to become the new capitalist class. He promoted a mixed economy of state and private ownership with economic planning. Chiang also promoted nine years of free education and the importance of science in Taiwanese education and values. Those measures generated great success, with consistent and strong growth and the stabilization of inflation. After the government of the Republic of China had moved to Taiwan, Chiang Kai-shek's economic policy turned towards economic liberalism, and he relied on Sho-Chieh Tsiang and other liberal economists to promote economic liberalization reforms in Taiwan. However, Taylor has noted that the developmental model of Chiangism in Taiwan still had elements of socialism, and the Gini index of Taiwan was around 0.28 by the 1970s, lower than that of the relatively egalitarian West Germany. The ROC (Taiwan) was one of the most equal countries in the pro-Western bloc. Compared with the period of Japanese rule, the share of total income held by the lower 40% of earners doubled to 22%, while the share of the upper 20% shrank from 61% to 39%. The Chiangist economic model can be seen as a form of dirigisme, with the state playing a crucial role in directing the market economy. Small businesses and state-owned enterprises in Taiwan flourished under the economic model, but the economy did not see the emergence of corporate monopolies, unlike in most other major capitalist countries. After the democratization of Taiwan, it began to slowly drift away from the Chiangist economic policy and to embrace a more free-market system, as part of economic globalization in the context of neoliberalism. Chiang had the personal power to review the rulings of all military tribunals, which during the martial law period tried civilians as well. In 1950, Lin Pang-chun and two other men were arrested on charges of financial crimes and sentenced to 3–10 years in prison. Chiang reviewed the sentences of all three and ordered them executed instead. In 1954, the Changhua monk Kao Chih-te and two others were sentenced to 12 years in prison for providing aid to accused communists. Chiang sentenced them to death after he had reviewed the case. That control over the decisions of military tribunals violated the ROC constitution. After Chiang's death, the next president, his son Chiang Ching-kuo, and Chiang Ching-kuo's successor, Lee Teng-hui, a native Taiwanese, would in the 1980s and 1990s increase native Taiwanese representation in the government and loosen the many authoritarian controls of the early era of ROC rule in Taiwan, paving the way for democratization. Relations with Japan. In 1971, the Australian opposition leader Gough Whitlam, who became prime minister in 1972 and swiftly relocated the Australian mission from Taipei to Beijing, visited Japan. After meeting with Japanese Prime Minister Eisaku Sato, Whitlam observed that the reason that Japan was hesitant to withdraw recognition from the Nationalist government was "the presence of a treaty between the Japanese government and that of Chiang Kai-shek." 
Sato explained that Japan's continued recognition of the Nationalist government was largely due to the personal relationship that various members of the Japanese government felt towards Chiang. That relationship was rooted largely in the generous and lenient treatment of Japanese prisoners of war by the Nationalist government in the years immediately after the Japanese surrender in 1945, and it was felt especially strongly as a bond of personal obligation by the most senior members then in power. Although Japan recognized the People's Republic in 1972, shortly after Kakuei Tanaka had succeeded Sato as Prime Minister of Japan, the memory of the relationship was strong enough to be reported by "The New York Times" (15 April 1978) as a significant factor inhibiting trade between Japan and the mainland. There is speculation that a clash between Communist forces and a Japanese warship in 1978 was caused by Chinese anger at Japanese Prime Minister Takeo Fukuda's attendance at Chiang's funeral. Historically, Japan's attempts to normalize its relationship with the People's Republic were met with accusations of ingratitude in Taiwan. Relations with the United States. Chiang was suspicious that covert operatives of the United States were plotting a coup against him. In 1950, Chiang Ching-kuo became director of the secret police (Bureau of Investigation and Statistics), a post he held until 1965. Chiang Kai-shek was also suspicious of politicians who were overly friendly to the United States and considered them his enemies. In 1953, seven days after surviving an assassination attempt, Wu Kuo-chen lost his position as governor of Taiwan Province to Chiang Ching-kuo. After fleeing to the United States the same year, Wu became a vocal critic of Chiang's family and government. Chiang Ching-kuo, who had been educated in the Soviet Union, initiated Soviet-style military organization in the Republic of China Armed Forces. He reorganized and Sovietized the political officer corps and propagated Kuomintang ideology throughout the military. Sun Li-jen, who had been educated at the American Virginia Military Institute, opposed those practices. Chiang Ching-kuo orchestrated the controversial court-martial and arrest of General Sun Li-jen in August 1955 for plotting a coup d'état with the CIA against his father, Chiang Kai-shek, and the Kuomintang. The CIA allegedly wanted to help Sun take control of Taiwan and declare its independence. Death. In 1975, 26 years after Chiang had come to Taiwan, he died in Taipei at the age of 87. His wife and his eldest son, Premier Chiang Ching-kuo, were at his bedside. He had suffered a heart attack and pneumonia in the preceding months, and he died from kidney failure aggravated by advanced heart failure on 5 April. Chiang's funeral was held on 16 April, and a month of mourning was declared. The response of the Japanese media was swift and shaped by a respect for Chiang, who had been trained in Japanese military schools and shared a particular fondness for the Japanese Empire. The Chinese music composer Hwang Yau-tai wrote the "Chiang Kai-shek Memorial Song". In mainland China, however, Chiang's death was met with little apparent mourning, and Communist state-run newspapers gave the brief headline "Chiang Kai-shek Has Died". Chiang's body was put in a copper coffin and temporarily interred at his favorite residence in Cihu, Daxi, Taoyuan. 
His funeral was attended by dignitaries from many nations, including US Vice President Nelson Rockefeller, South Korean Prime Minister Kim Jong-pil, and two former Japanese prime ministers: Nobusuke Kishi and Eisaku Sato. A national memorial day was established on 5 April; it was disestablished in 2007. When his son, Chiang Ching-kuo, died in 1988, he was entombed in a separate mausoleum in nearby Touliao. The hope was to have both of them buried at their birthplace in Fenghua once that became possible. In 2004, Chiang Fang-liang, the widow of Chiang Ching-kuo, asked for both father and son to be buried at Wuzhi Mountain Military Cemetery in Xizhi, Taipei County (now New Taipei City). Chiang's ultimate funeral ceremony became a political battle between the wishes of the state and those of his family. Chiang was succeeded as president by Vice President Yen Chia-kan and as Kuomintang party leader by his son Chiang Ching-kuo, who retired Chiang Kai-shek's title of Director-General and instead assumed the position of chairman. Yen's presidency was interim; Chiang Ching-kuo, who was the Premier, became president after the end of Yen's term three years later. Cult of personality. Chiang's portrait hung over Tiananmen Square until 1949, when it was replaced with Mao's portrait. Portraits of Chiang were common in private homes and in public on the streets. After his death, the Chiang Kai-shek Memorial Song was written in 1988 to commemorate him. In Cihu, there are several statues of Chiang Kai-shek. Chiang was popular among many people and dressed in plain, simple clothes, unlike contemporary Chinese warlords, who dressed extravagantly. Quotes from the Quran and hadith were used by Muslims in the Kuomintang-controlled Muslim publication, the "Yuehua", to justify Chiang Kai-shek's rule over China. When the Muslim general and warlord Ma Lin was interviewed, he was described as having "high admiration for and unwavering loyalty to Chiang Kai-shek". Philosophy. The Kuomintang used traditional Chinese religious ceremonies and promoted martyrdom. Kuomintang ideology propagated the view that the souls of Party martyrs who died fighting for the Kuomintang, the revolution, and the party founder Sun Yat-sen were sent to heaven. Chiang Kai-shek believed that these martyrs witnessed events on Earth from heaven after their deaths. Unlike Sun's original Tridemist ideology, which was heavily influenced by Western enlightenment theorists such as Henry George, Abraham Lincoln, Bertrand Russell, and John Stuart Mill, Chiang's ideology showed a much stronger traditional Chinese Confucian influence. Chiang rejected the Western progressive ideologies of individualism, liberalism, and the cultural aspects of Marxism, and he was therefore generally more culturally and socially conservative than Sun Yat-sen. Jay Taylor has described Chiang Kai-shek as a revolutionary nationalist and a "left-leaning Confucian-Jacobinist". When the Northern Expedition was complete, Kuomintang generals led by Chiang Kai-shek paid tribute to Sun's soul in heaven with a sacrificial ceremony at the Xiangshan Temple in Beijing in July 1928. Among the Kuomintang generals present were the Muslim generals Bai Chongxi and Ma Fuxiang. Chiang Kai-shek considered both the Han Chinese and all the ethnic minorities of China, the Five Races Under One Union, to be descendants of the Yellow Emperor, the mythical founder of the Chinese nation, and to belong to the Chinese nation, the Zhonghua Minzu. 
He introduced this into Kuomintang ideology, which was propagated through the educational system of the Republic of China. Chiang, as a Chinese nationalist and a Confucian, was against the iconoclasm of the May Fourth Movement. Motivated by his sense of nationalism, he viewed some Western ideas as foreign and believed that the great introduction of Western ideas and literature that the May Fourth Movement promoted was not beneficial to China. He and Sun criticized the May Fourth intellectuals for corrupting the morals of China's youth. Chiang Kai-shek once said: Historical perception. For some, Chiang's legacy was that of a national hero who achieved unification as leader of the Northern Expedition and who led the resistance against Japan's invasion, enduring without major aid as he called on his countrymen to fight to the "bitter end" until their ultimate victory against Japan in 1945. He was also a champion of anti-communism during the formative years of the World Anti-Communist League. During the subsequent Cold War, he was seen as the leader of Free China and a bulwark against a possible communist invasion. Others see him in a darker light. Chiang was often perceived as "the man who lost China", criticized for his poor military skills, such as issuing unrealistic orders and persistently attempting to fight unwinnable battles, leading to the loss of his best troops. The historian Rudolph Rummel documented that Chiang's decisions led to millions of excess deaths from calamities such as the persecution of actual or perceived communists, the conscription of soldiers, the confiscation of food, and the flooding of downstream regions of the Yellow River during the Second Sino-Japanese War. His government was also accused of being corrupt and of allying with known criminals such as Du Yuesheng for political and financial gain, and his critics often accuse him of fascism. In Taiwan, he ruled throughout a period of martial law. Some opponents charge that Chiang's efforts in developing the island were mostly aimed at turning it into a strong base from which to recover mainland China and that he had little regard for the Taiwanese people. Unlike Chiang's son Chiang Ching-kuo, who is respected across the political spectrum, Chiang Kai-shek's image is perceived rather negatively in Taiwan; he was rated the lowest in two opinion polls about the perception of former presidents. His popularity in Taiwan is divided along political lines, enjoying better support among Kuomintang (KMT) voters while being widely unpopular among Democratic Progressive Party (DPP) voters and those who blame him for the thousands killed during the February 28 Incident and criticise his dictatorial rule. In contrast, his image has partially improved in mainland China. He had been portrayed as a villain and a "bourgeoisie reactionary lackey" who fought against the "liberation" of China by the communists, but since the 2000s, the media and popular culture have depicted him in a less negative manner. For example, many praised the 2009 movie sponsored by the Chinese Communist Party, "The Founding of a Republic", for moving away from casting Chiang as 'evil' opposite Mao and emphasizing instead that the contingencies of war led the communists to victory. In the context of the Second Sino-Japanese War, aspects of his record, such as Chiang's trip to India and his meeting with Roosevelt and Churchill in Cairo, can be viewed positively. The shift also takes into account Chiang's commitment to a unified China and his stance against Taiwanese separatism. 
Chiang's ancestral home in Fenghua, Zhejiang, has become a museum and tourist attraction. Historian Rana Mitter notes that the displays inside are very positive about Chiang's role during the Second Sino-Japanese War. Mitter further observed that, ironically, today's China is closer to Chiang's vision than to Mao's and wrote, "One can imagine Chiang Kai-shek's ghost wandering round China today nodding in approval, while Mao's ghost follows behind him, moaning at the destruction of his vision". Liang Shuming opined that Chiang Kai-shek's "greatest contribution was to make the CCP successful. If he had been a bit more trustworthy, if his character was somewhat better, the CCP would have been unable to beat him". Some Chinese historians argue that the main determinants of Chiang's defeat were not corruption or the lack of US support, but his decision to start the civil war while devoting 70% of government expenditure to the military, his overestimation of the Nationalist forces equipped with US arms, and the loss of popular support and of his soldiers' morale. Other historians argue that his failure was largely caused by external factors outside of Chiang's control, including the refusal of the Truman administration to support Chiang by withdrawing aid, the imposition of an arms embargo by George C. Marshall, the failed pursuit of a détente between the Nationalists and the Communists, the American push for a coalition government with the CCP, and the USSR's consistent aid and support for the CCP during the civil war. In recent years, Chiang's image has been somewhat rehabilitated, and he has been increasingly perceived as a man overwhelmed by the events in China, having to fight the Communists, the Japanese, and provincial warlords simultaneously while trying to reconstruct and unify the country. His sincere, albeit often unsuccessful, attempts to build a more powerful and modern nation have been noted by scholars such as Jonathan Fenby, Rana Mitter, and biographer Jay Taylor. Family. Wives. In 1901, in an arranged marriage at age 14, Chiang was married to Mao Fumei, an illiterate villager five years his senior. While married to Mao, Chiang took two concubines (concubinage was still a common practice for well-to-do, non-Christian males in China): he took Yao Yecheng (1887–1966) as a concubine in late 1912 and married Chen Jieru (1906–1971) in December 1921. While he was still living in Shanghai, Chiang and Yao adopted a son, Wei-kuo. Chen adopted a daughter in 1924, named Yaoguang, who later adopted her mother's surname. Chen's autobiography rejected the idea that she was a concubine, claiming that, by the time she married Chiang, he had already divorced Yao and that she was therefore his wife. Chiang and Mao had a son, Ching-kuo. According to the memoirs of Chen Jieru, Chiang's second wife, she contracted gonorrhea from Chiang soon after their marriage. He told her that he had acquired the disease after separating from his first wife and living with his concubine Yao Yecheng, as well as with many other women he consorted with. His doctor explained to her that Chiang had had sex with her before completing his treatment for the disease. As a result, both Chiang and Chen Jieru believed that they had become sterile; however, a purported miscarriage by Soong Mei-ling in August 1928 would, if it actually occurred, cast serious doubt on whether this was true. Religion and relationships with religious communities. 
Chiang personally dealt extensively with religions, power figures, and factions in China during his rule. Religious views. Chiang Kai-shek was born and raised as a Buddhist but became a Methodist upon his marriage to his fourth wife, Soong Mei-ling. It was previously believed that this was a political move, but further studies of his personal diaries suggest that his faith was sincere. Relationship with Muslims. Chiang developed close relationships with Muslim generals. He became a sworn brother of the Chinese Muslim general Ma Fuxiang and appointed him to high-ranking positions. Chiang addressed Ma Fuxiang's son Ma Hongkui as Shao Yun Shixiong. Ma Fuxiang attended national leadership conferences with Chiang during battles against Japan. Ma Hongkui was eventually scapegoated for the failure of the Ningxia Campaign against the Communists, so he moved to the US instead of remaining in Taiwan with Chiang. When Chiang became President of China after the Northern Expedition, he carved Ningxia and Qinghai out of Gansu province and appointed Muslim generals as military governors of all three provinces: Ma Hongkui, Ma Hongbin, and Ma Qi. The three Muslim governors, known as the Xibei San Ma, controlled armies composed entirely of Muslims. Chiang called on the three and their subordinates to wage war against the Soviets, Tibetans, Communists, and the Japanese. Chiang continued to appoint Muslims as governors of the three provinces, including Ma Lin and Ma Fushou. Chiang's appointments, the first time that Muslims had been appointed as governors of Gansu, increased the prestige of Muslim officials in northwestern China. The armies raised by this "Ma Clique", most notably their Muslim cavalry, were incorporated into the KMT army. Chiang appointed the Hui general Bai Chongxi as the Minister of National Defence of the Republic of China, which controlled the ROC military. Chiang also supported the Muslim general Ma Zhongying, whom he had trained at the Whampoa Military Academy, during the Kumul Rebellion in a jihad against Jin Shuren, Sheng Shicai, and the Soviet Union during the Soviet invasion of Xinjiang. Chiang designated Ma's Muslim army as the 36th Division (National Revolutionary Army) and gave his troops KMT flags and uniforms. Chiang then supported the Muslim general Ma Hushan against Sheng and the Soviet Union in the Xinjiang War (1937). All Muslim generals commissioned by Chiang in the National Revolutionary Army swore allegiance to him. Several, like Ma Shaowu and Ma Hushan, were loyal to Chiang and Kuomintang hardliners. The Ili Rebellion and the Pei-ta-shan Incident plagued relations with the Soviet Union during Chiang's rule and caused trouble with the Uyghurs. During the Ili Rebellion and the Peitashan incident, Chiang deployed Hui troops against Uyghur mobs in Turfan and against Soviet Russians and Mongols at Peitashan. During Chiang's rule, attacks on foreigners and ethnic minorities by the allied warlords of the Nationalist government, such as the Ma Clique, flared up in several incidents. One of these was the Battle of Kashgar, in which a Muslim army loyal to the Kuomintang massacred 4,500 Uyghurs and killed several Britons at the British consulate in Kashgar. Hu Songshan, a Muslim imam, backed Chiang Kai-shek's regime and gave prayers for his government. ROC flags were saluted by Muslims in Ningxia during prayer, along with exhortations to nationalism, during Chiang's rule. 
Chiang sent Muslim students abroad to study at institutions such as Al-Azhar University, and Muslim schools throughout China taught loyalty to his regime. The "Yuehua", a Chinese Muslim publication, quoted the Quran and hadith to justify submitting to Chiang Kai-shek as the leader of China and to justify jihad in the war against Japan. The Yihewani (Ikhwan al Muslimun, a.k.a. the Muslim Brotherhood) was the predominant Muslim sect backed by the Chiang government during his rule. Other Muslim sects, like the Xidaotang, and Sufi brotherhoods, like the Jahriyya and Khuffiya, were also supported by his regime. The Chinese Muslim Association, a pro-Kuomintang and anti-Communist organization, was set up by Muslims working in his regime. Salafists attempted to gain a foothold in China during his regime, but the Yihewani and the Hanafi Sunni Gedimu denounced the Salafis as radicals, engaged in fights against them, and declared them heretics, forcing the Salafis to form a separate sect. Ma Ching-chiang, a Muslim general, served as an advisor to Chiang Kai-shek. Ma Buqing was another Muslim general who fled to Taiwan along with Chiang. His government donated money to build the Taipei Grand Mosque on Taiwan. Relationship with Buddhists and Christians. Chiang had uneasy relations with the Tibetans. He fought against them in the Sino-Tibetan War, and he supported the Muslim general Ma Bufang in his war against Tibetan rebels in Qinghai. Chiang several times ordered Ma Bufang to prepare his Islamic army to invade Tibet in order to deter Tibetan independence, and he threatened the Tibetans with aerial bombardment. Ma Bufang attacked the Tibetan Buddhist Tsang monastery in 1941. After the war, Chiang appointed Ma Bufang as ambassador to Saudi Arabia. Chiang incorporated Methodist values into the New Life Movement under the influence of his wife. Dancing and Western music were discouraged. In one incident, several youths splashed acid on people wearing Western clothing, although Chiang was not directly responsible. Despite being a Methodist, he made reference to the Buddha in his diary and encouraged the establishment of a Buddhist political party under Master Taixu. According to the Jehovah's Witnesses' magazine "The Watchtower", some of their members travelled to Chongqing and spoke to him personally while distributing their literature there during World War II.
6863
48206778
https://en.wikipedia.org/wiki?curid=6863
Compression ratio
The compression ratio is the ratio between the maximum and minimum volume during the compression stage of the power cycle in a piston or Wankel engine. A fundamental specification for such engines, it can be measured in two different ways. The simpler way is the static compression ratio: in a reciprocating engine, this is the ratio of the volume of the cylinder when the piston is at the bottom of its stroke to that volume when the piston is at the top of its stroke. The dynamic compression ratio is a more advanced calculation which also takes into account gases entering and exiting the cylinder during the compression phase. Effect and typical ratios. A high compression ratio is desirable because it allows an engine to extract more mechanical energy from a given mass of air–fuel mixture due to its higher thermal efficiency. This occurs because internal combustion engines are heat engines, and higher compression ratios permit the same combustion temperature to be reached with less fuel, while giving a longer expansion cycle, creating more mechanical power output and lowering the exhaust temperature. However, several engineering constraints limit the practical implementation of very high compression ratios. Higher compression ratios increase peak cylinder pressures and temperatures, requiring stronger engine components and more robust materials to withstand the additional mechanical and thermal stresses. Additionally, high compression ratios make engines more susceptible to knock and detonation, particularly when using lower-octane fuels, which can damage engine components and reduce efficiency. The thermal efficiency gains from increasing the compression ratio also diminish beyond approximately 10:1, as increased friction and heat losses begin to offset the thermodynamic benefits. Petrol engines. In petrol (gasoline) engines used in passenger cars over the past 20 years, compression ratios have typically been between 8:1 and 12:1, although several production engines have used higher compression ratios. When forced induction (e.g. a turbocharger or supercharger) is used, the compression ratio is often lower than in naturally aspirated engines. This is because the turbocharger or supercharger has already compressed the air before it enters the cylinders. Engines using port fuel injection typically run lower boost pressures and/or compression ratios than direct-injected engines because port fuel injection causes the air–fuel mixture to be heated together, leading to detonation. Conversely, directly injected engines can run higher boost because heated air will not detonate without fuel being present. Higher compression ratios can make gasoline (petrol) engines subject to engine knocking (also known as "detonation", "pre-ignition", or "pinging") if fuel with a lower octane rating is used. This can reduce efficiency or damage the engine if knock sensors are not present to modify the ignition timing. Diesel engine. Diesel engines use higher compression ratios than petrol engines, because the lack of a spark plug means that the compression ratio must increase the temperature of the air in the cylinder sufficiently to ignite the diesel using compression ignition. Compression ratios are often between 14:1 and 23:1 for direct-injection diesel engines, and between 18:1 and 23:1 for indirect-injection diesel engines. At the lower end of 14:1, NOx emissions are reduced at the cost of a more difficult cold start. 
Mazda's Skyactiv-D, introduced in 2013 as the first such commercial engine, used adaptive fuel injectors among other techniques to ease cold starting. Other fuels. The compression ratio may be higher in engines running exclusively on liquefied petroleum gas (LPG or "propane autogas") or compressed natural gas, due to the higher octane rating of these fuels. Kerosene engines typically use a compression ratio of 6.5 or lower. The petrol-paraffin engine version of the Ferguson TE20 tractor had a compression ratio of 4.5:1 for operation on tractor vaporising oil with an octane rating between 55 and 70. Motorsport engines. Motorsport engines often run on high-octane petrol and can therefore use higher compression ratios. For example, motorcycle racing engines can use compression ratios as high as 14.7:1, and it is common to find motorcycles with compression ratios above 12.0:1 designed for 95 or higher octane fuel. Ethanol and methanol can take significantly higher compression ratios than gasoline. Racing engines burning methanol and ethanol fuel often have a compression ratio of 14:1 to 16:1. Mathematical formula. In a reciprocating engine, the static compression ratio (CR) is the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke. It is therefore calculated as CR = (V_d + V_c) / V_c, where V_d is the displacement (swept) volume and V_c is the clearance (combustion chamber) volume. The swept volume can be estimated by the cylinder volume formula V_d = (π/4) · b² · s, where b is the cylinder bore and s is the piston stroke. Because of the complex shape of V_c, it is usually measured directly, often by filling the cylinder with liquid and then measuring the volume of the liquid used. Variable compression ratio engines. Most engines use a fixed compression ratio; however, a variable compression ratio engine is able to adjust the compression ratio while the engine is in operation, in order to increase fuel efficiency under varying loads. Variable compression engines allow the volume above the piston at top dead centre to be changed. Higher loads require lower ratios to increase power, while lower loads need higher ratios to increase efficiency, i.e. to lower fuel consumption. For automotive use this needs to be done as the engine is running, in response to the load and driving demands. The 2019 Infiniti QX50 was the first commercially available car to use a variable compression ratio engine. Dynamic compression ratio. The "static compression ratio" discussed above — calculated solely from the cylinder and combustion chamber volumes — does not take into account any gases entering or exiting the cylinder during the compression phase. In most automotive engines, the intake valve closure (which seals the cylinder) takes place during the compression phase (i.e. after bottom dead centre, BDC), which can cause some of the gases to be pushed back out through the intake valve. On the other hand, intake port tuning and scavenging can cause a greater amount of gas to be trapped in the cylinder than the static volume would suggest. The "dynamic compression ratio" accounts for these factors. The dynamic compression ratio is higher with more conservative intake camshaft timing (i.e. intake valve closure soon after BDC), and lower with more radical intake camshaft timing (i.e. intake valve closure later after BDC). 
Regardless, the dynamic compression ratio is always lower than the static compression ratio. Absolute cylinder pressure and the dynamic compression ratio (DCR) are related by the polytropic expression P = P_atm × (DCR)^n, equivalently DCR = (P / P_atm)^(1/n), where n is a polytropic value for the ratio of specific heats for the combustion gases at the temperatures present (this compensates for the temperature rise caused by compression, as well as heat lost to the cylinder). Under ideal (adiabatic) conditions, the ratio of specific heats would be 1.4, but a lower value, generally between 1.2 and 1.3, is used, since the amount of heat lost will vary among engines based on design, size and materials used. For example, if the static compression ratio is 10:1 and the dynamic compression ratio is 7.5:1, a useful value for cylinder pressure would be 7.5^1.3 × atmospheric pressure, or 13.7 bar (relative to atmospheric pressure). The two corrections for dynamic compression ratio affect cylinder pressure in opposite directions, but not in equal strength. An engine with a high static compression ratio and late intake valve closure will have a dynamic compression ratio similar to an engine with lower compression but earlier intake valve closure.
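The static and dynamic relations above can be checked numerically. What follows is a minimal sketch in Python, not part of the original article: the bore, stroke, and clearance-volume figures are illustrative assumptions rather than values from the text, and the polytropic exponent of 1.3 simply mirrors the worked example above.

import math

def swept_volume_cc(bore_mm: float, stroke_mm: float) -> float:
    """Swept (displaced) volume of one cylinder in cm^3: V_d = (pi/4) * b^2 * s."""
    bore_cm, stroke_cm = bore_mm / 10.0, stroke_mm / 10.0
    return math.pi / 4.0 * bore_cm ** 2 * stroke_cm

def static_compression_ratio(swept_cc: float, clearance_cc: float) -> float:
    """Static compression ratio: CR = (V_d + V_c) / V_c."""
    return (swept_cc + clearance_cc) / clearance_cc

def cylinder_pressure_atm(dynamic_cr: float, n: float = 1.3) -> float:
    """Estimated absolute cylinder pressure, in atmospheres, from the
    polytropic relation P = P_atm * DCR**n."""
    return dynamic_cr ** n

if __name__ == "__main__":
    # Hypothetical cylinder: 86 mm bore, 86 mm stroke, 50 cm^3 clearance volume.
    v_d = swept_volume_cc(86.0, 86.0)                                  # ~499.6 cm^3
    print(f"static CR ~ {static_compression_ratio(v_d, 50.0):.1f}:1")  # ~11.0:1
    # The article's worked example: dynamic CR of 7.5:1 with n = 1.3.
    print(f"cylinder pressure ~ {cylinder_pressure_atm(7.5):.1f} x atmospheric")  # ~13.7

Running the sketch reproduces the article's figure of roughly 13.7 times atmospheric pressure for a dynamic compression ratio of 7.5:1 with n = 1.3, and illustrates how a modest clearance volume yields a static ratio near 11:1 for a typical half-litre cylinder.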
6865
28481209
https://en.wikipedia.org/wiki?curid=6865
Concordat of Worms
The Concordat of Worms (; ), also referred to as the Pactum Callixtinum or Pactum Calixtinum, was an agreement between the Catholic Church and the Holy Roman Empire which regulated the procedure for the appointment of bishops and abbots in the Empire. Signed on 23 September 1122 in the German city of Worms by Pope Callixtus II and Emperor Henry V, the agreement put an end to the Investiture Controversy, a conflict between state and church over the right to appoint religious office holders that had begun in the middle of the 11th century. By signing the concordat, Henry renounced his right to invest bishops and abbots with ring and crosier, and opened ecclesiastical appointments in his realm to canonical elections. Callixtus, in turn, agreed to the presence of the emperor or his officials at the elections and granted the emperor the right to intervene in the case of disputed outcomes. The emperor was also allowed to perform a separate ceremony in which he would invest bishops and abbots with a sceptre, representing the lands that constituted the temporalities associated with their episcopal see. Background. During the middle of the 11th century, a reformist movement within the Christian Church sought to reassert the rights of the Holy See at the expense of the European monarchs. Having been elected in 1073, the reformist Pope Gregory VII proclaimed several edicts aimed at strengthening the authority of the papacy, some of which were formulated in the "Dictatus papae" of 1075. Gregory's edicts postulated that secular rulers were answerable to the pope and forbade them to make appointments to clerical offices (a process known as investiture). The pope's doctrines were vehemently rejected by Henry IV, the Holy Roman Emperor, who habitually invested the bishops and abbots of his realm. The ensuing conflict between the Empire and the papacy is known as the Investiture Controversy. The dispute continued after the death of Gregory VII in 1085 and the abdication of Henry IV in 1105. Even though Henry's son and successor, the Emperor Henry V, looked towards reconciliation with the reformist movement, no lasting compromise was achieved in the first 16 years of his reign. In 1111, Henry V brokered an agreement with Pope Paschal II at Sutri, whereby he would abstain from investing clergy in his realm in exchange for the restoration of church property that had originally belonged to the Empire. The Sutri agreement, Henry hoped, would convince Paschal to assent to Henry's official coronation as emperor. The agreement failed to be implemented, leading Henry to imprison the pope. After two months of captivity, Paschal vowed to grant the coronation and to accept the emperor's role in investiture ceremonies. He also agreed never to excommunicate Henry. Given that these concessions had been won by force, ecclesiastical opposition to the Empire continued. The following year, Paschal reneged on his promises. Mouzon summit. In January 1118, Pope Paschal died. He was succeeded by Gelasius II, who died in January 1119. His successor, the Burgundian Callixtus II, resumed negotiations with the Emperor with the aim of settling the dispute between the church and the Empire. In the autumn of 1119, two papal emissaries, William of Champeaux and Pons of Cluny, met Henry at Strasbourg, where the emperor agreed in principle to abandon the secular investiture ceremony that involved giving new bishops and abbots a ring and a crosier. 
The two parties scheduled a final summit between Henry and Callixtus at Mouzon, but the meeting ended abruptly after the emperor refused to accept a short-notice change in Callixtus's demands. The church leaders, who were deliberating their position at a council in Reims, reacted by excommunicating Henry. However, they did not endorse the pope's insistence upon the complete abandonment of secular investiture. The negotiations ended in failure. Historians disagree as to whether Calixtus actually wanted peace or fundamentally mistrusted Henry. Due to his uncompromising position in 1111, Calixtus has been termed an "ultra", and his election to the papacy may indicate that the College of Cardinals saw no reason to show weakness to the emperor. This optimism about victory was founded on the very visible, and very vocal, opposition to Henry from within his own nobility, and the cardinals may have seen the emperor's internal weaknesses as an opportunity for outright victory. Further negotiations. After the failure of the Mouzon negotiations, and with the chances of Henry's unconditional surrender receding, the majority of the clergy became willing to compromise in order to settle the dispute. The polemic writings and pronouncements that had figured so highly during the Investiture Dispute had died down by this point. Historian Gerd Tellenbach argues that, despite appearances, these years were "no longer marked by an atmosphere of bitter conflict". This was in part the result of the papacy's realization that it could not win two different disputes on two separate fronts, as it had been trying to do. Calixtus had been personally involved in negotiations with the Emperor over the previous decade, and his intimate knowledge of the delicate situation made him the perfect candidate for the attempt. The difference between 1119 and 1122, argues Stroll, was not Henry, who had been willing to make concessions in 1119, but Calixtus, who had then been intransigent but was now intent upon reaching an agreement. The same sentiment prevailed in much of the German nobility. In 1121, pressured by a faction of nobles from the Lower Rhine and the Duchy of Saxony under the leadership of the archbishop Adalbert of Mainz, Henry agreed to make peace with the pope. In response, in February 1122, Calixtus wrote to Henry in a conciliatory tone via the Bishop of Acqui. His letter has been described as "a carefully crafted overture". In his letter, Calixtus drew attention to their blood relationship, suggesting that while their shared ancestry compelled them to love each other as brothers, it was fundamental that the German kings draw their authority from God, but via his servants, not directly. However, Calixtus also emphasised for the first time that he blamed not Henry personally for the dispute but the bad advisors who had dictated unsound policy to him. In a major shift in policy since the Council of Reims of 1119, the pope stated that the church gifts what it possesses to all its children, without making claims upon them. This was intended to reassure Henry that in the event of peace between them, his position and Empire were secure. Shifting from the practical to the spiritual, Calixtus next asked Henry to bear in mind that he was a king, but, like all men, limited in his earthly capability; he had armies, and kings below him, but the church had Christ and the Apostles. 
Continuing his theme, he referred indirectly to Henry's excommunication, which he himself had twice pronounced, and begged Henry to allow the conditions for peace to be created, as a result of which the church's and God's glory would be increased, as concomitantly would the Emperor's. Conversely, he made sure to include a threat: if Henry did not change his ways, Calixtus threatened to place "the protection of the church in the hands of wise men". Historian Mary Stroll argues that, in taking this approach, Calixtus was taking advantage of the fact that, while he himself "was hardly in a position to sabre rattle" due to his military defeat in the south and his difficulty with his own Cardinals, Henry was also under pressure in Germany in both the military and spiritual spheres. The Emperor replied through the Bishop of Speyer and the Abbot of Fulda, who travelled to Rome and collected the pope's emissaries under the Cardinal Bishop of Ostia. Speyer was a representative of Henry's political opponents in Germany, whereas Fulda was a negotiator rather than politically partisan. Complicating matters was a disputed election to the bishopric of Wurzburg in February 1122, of the kind that was at the heart of the Investiture Dispute. Although this almost led to an outbreak of civil war, a truce was arranged in August, allowing the parties to return to the papal negotiations. In the summer of 1122, a synod was convened in Mainz, at which imperial emissaries concluded the terms of their agreement with representatives of the church. In a sign that the Pope intended the impending negotiations to be successful, a Lateran council was announced for the following year. Worms. The Emperor received the papal legates in Worms with due ceremony and awaited the outcome of the negotiations, which appear to have actually taken place in nearby Mainz, hostile territory to Henry. As such, he had to communicate via messenger to keep up with events. Abbot Ekkehard of Aura chronicles that discussions took over a week to conclude. On 8 September, he met the papal legates and their final agreements were codified for publication. Although a possible compromise solution had already been received from England, this does not seem to have ever been considered in depth, probably on account of it containing an oath of homage between Emperor and Pope, which had been a historical sticking point in earlier negotiations. The papal delegation was led by Cardinal Bishop Lamberto Scannabecchi of Ostia, the future Pope Honorius II. Both sides studied previous negotiations between them, including those from 1111, which were considered to have created precedent. On 23 September 1122, papal and imperial delegates signed a series of documents outside the walls of Worms, as there was insufficient room in the city for the number of attendees and onlookers. Adalbert, Archbishop of Mainz, wrote to Calixtus of how complex the negotiations had been, given that, as he said, Henry regarded the powers he was being asked to renounce as being hereditary in the Imperial throne. It is probable that what was eventually promulgated was the result of almost every word being carefully considered. The main difference between what was to be agreed at Worms and previous negotiations lay in the concessions from the pope. Concordat. The agreements reached at Worms were in the nature of both concessions and assurances to the other party. 
Henry, on oath before God, the apostles, and the church, renounced his right to invest bishops and abbots with ring and crosier and opened ecclesiastical appointments in his realm to canonical elections, "regno vel imperio". He also recognised the traditional extent and boundaries of the papal patrimony as a legal entity rather than one malleable to the emperor. Henry promised to return to the church those lands rightfully belonging to it that had been seized by himself or his father; furthermore, he would assist the pope in regaining those that had been taken by others, and "he will do the same thing for all other churches and princes, both ecclesiastical and lay". If the pope requested Imperial assistance, he would receive it, and if the church came to the empire for justice, it would be treated fairly. He also swore to abstain from "all investiture by ring and staff", marking the end of an ancient imperial tradition. Callixtus made similar reciprocal promises regarding the empire in Italy. He agreed to the presence of the emperor or his officials at the elections and granted the emperor the right to adjudicate, on episcopal advice, in the case of disputed outcomes—as long as the elections had been held peacefully and without simony—which had officially been the case ever since precedent had been set by the London Accord of 1107. This right to judge was constrained by an assurance that he would support the majority vote among electors, and further that he would take the advice of his other bishops before doing so. The emperor was also allowed to perform a separate ceremony in which he would invest bishops and abbots with their "regalia", a sceptre representing the imperial lands associated with their episcopal see. This clause also contained a "cryptic" condition that once the elect had been so endowed, the new bishop "should do what he ought to do according to imperial rights". In the German imperial lands this was to take place prior to the bishop-elect's consecration; elsewhere in the empire—Burgundy and Italy, excepting the Papal States—within six months of the ceremony. The differentiation between the German portion of the Empire and the rest was of particular importance to Calixtus, as the papacy had traditionally felt more threatened by it in the Italian peninsula than by the broader Empire. Finally, the pope granted "true peace" to the emperor and all those who had supported him. Calixtus had effectively overturned wholesale the strategy he had pursued during the Mouzon negotiations; episcopal investitures in Germany were to take place with very little substantive change in ceremony, while temporal involvement remained, only replacing investiture with homage, although the word itself—"hominium"—was studiously avoided. Adalbert, from whom Calixtus first received news of the final concordat, emphasized that it still had to be approved in Rome; this suggests, argues Stroll, that the Archbishop—and probably the papal legation as a whole—were against making concessions to the emperor, and probably wanted Calixtus to disown the agreement. Adalbert believed the agreement would make it easier for the Emperor to legalise the intimidation of episcopal electors, writing that "through the opportunity of [the emperor's] presence, the Church of God must undergo the same slavery as before, or an even more oppressive one". 
However, argues Stroll, the concessions Calixtus made were an "excellent bargain" in return for eradicating the danger on the papacy's northern border and therefore allowing him to focus, without threat or distraction, on the Normans to the south. It had achieved its peace, argues Norman Cantor, by allowing local national custom and practice to determine future relations between crown and pope; in most cases, he notes, this "favored the continuance of royal control over the church". The concordat was published as two distinct charters, each laying out the concessions one party was making to the other. They are known respectively as the Papal (or the "Calixtinum") and the Imperial ("Henricianum") charters. Calixtus's is addressed to the emperor—in quite personal terms—while Henry's is made out to God. The bishop of Ostia gave the emperor the kiss of peace on behalf of the pope and said Mass. By these rites Henry was returned to the church, the negotiators were lauded for succeeding in their delicate mission, and the concordat was called "peace at the will of the pope". Neither charter was signed; both contained probably intentional vagaries and unanswered questions—such as the position of the papacy's churches that lay outside both the patrimony and Germany—which were subsequently addressed on a case-by-case basis. Indeed, Robert Benson has suggested that the brevity of the charters was deliberate and that the agreement as a whole is as important for what it omits as for what it includes. The term "regalia", for example, was not only undefined but literally meant two different things to each party: in the "Henricianum" it referred to the feudal duty owed to a monarch; in the "Calixtinum", it meant the episcopal temporalities. Broader questions, such as the nature of the relationship between church and Empire, were also not addressed, although some ambiguity was removed by an 1133 papal privilege. The Concordat was widely, and deliberately, publicised around Europe. Calixtus was not in Rome when the concordat was delivered. He had left the city by late August and was not to return until mid- to late October, making a progress to Anagni and taking the bishopric of Anagni and Casamari Abbey under his protection. Preservation. The concordat was ratified at the First Council of the Lateran, and the original "Henricianum" charter is preserved at the Vatican Apostolic Archive; the "Calixtinum" has not survived except in subsequent copies. A copy of the former is also held in the "Codex Udalrici", but this is an abridged version for political circulation, as it reduces the number of imperial concessions made. Indicating the extent to which he saw the agreement as a papal victory, Calixtus had a copy of the "Henricianum" painted on a Lateran Palace chamber wall; while nominally portraying the concordat as a victory for the papacy, it also ignored the numerous concessions made to the emperor. This was part of what Hartmut Hoffmann has called "a conspiracy of silence" regarding papal concessions. Indeed, while the Pope is pictured enthroned, and Henry only standing, the suggestion is still that they were jointly wielding their respective authority to come to this agreement. An English copy of the "Calixtinum" made by William of Malmesbury is reasonably accurate but omits the clause mentioning the use of a sceptre in the granting of the "regalia". He then, having condemned Henry's "Teuton fury", proceeds to praise him, comparing him favourably to Charlemagne for his devotion to God and the peace of Christendom. 
Aftermath. The first invocation of the concordat was not in the empire, as it turned out, but by Henry I of England the following year. Following a long-running dispute between Canterbury and York which ended up in the papal court, Joseph Huffman argues that it would have been controversial for the Pope "to justify one set of concessions in Germany and another in England". The concordat ended once and for all the "Imperial church system of the Ottonians and Salians". The First Lateran Council was convoked to confirm the Concordat of Worms. The council was highly representative, with nearly 300 bishops and 600 abbots from every part of Catholic Europe present; it convened on 18 March 1123. One of its primary concerns was to emphasise the independence of diocesan clergy, and to do so it forbade monks to leave their monasteries to provide pastoral care, which would in future be the sole preserve of the diocese. In ratifying the Concordat, the Council confirmed that in future bishops would be elected by their clergy, although, also per the Concordat, the Emperor could refuse the homage of German bishops. Decrees were passed directed against simony, concubinage among the clergy, church robbers, and forgers of Church documents; the council also reaffirmed indulgences for Crusaders. These, argues C. Colt Anderson, "established important precedents in canon law restricting the influence of the laity and the monks". While this led to a busy period of reform, it was important for those advocating reform not to allow themselves to be confused with the myriad heretical sects and schismatics who were making similar criticisms. The Concordat was the last major achievement for Emperor Henry, as he died in 1125; an attempted invasion of France came to nothing in 1124 in the face of "determined opposition". Fuhrmann comments that, as Henry had shown in his life "even less interest in new currents of thought and feeling than his father", he probably did not understand the significance of the events he had lived through. The peace only lasted until his death; when the Imperial Electors met to choose his successor, reformists took the opportunity to attack the imperial gains of Worms on the grounds that they had been granted to Henry personally rather than to emperors generally. However, later emperors, such as Frederick I and Henry VI, continued to wield as much power, if less tangibly, as their predecessors in episcopal elections, and to a greater degree than that allowed them by Calixtus's charter. Successive emperors found the Concordat sufficiently favourable that it remained, almost unaltered, until the empire was dissolved by Francis II in 1806 on account of Napoleon. Popes, likewise, were able to use the powers codified to them in the Concordat to their advantage in future internal disputes with their Cardinals. Reception. The most detailed contemporary description of the Concordat comes to historians through a brief chronicle known as the 1125 continuation chronicle. This pro-papal document lays the blame for the schism squarely upon Henry—by his recognition of Gregory VIII—and the praise for ending it on Calixtus, through his making only temporary compromises. I. S. Robinson, writing in "The New Cambridge Medieval History", suggests that this was a deliberate ploy to leave further negotiations open with a more politically malleable Emperor in future. 
To others it was not so clear cut; Honorius of Autun, for example, writing later in the century, discussed lay investiture as an aspect of papal-Imperial relations, and even a century later the "Sachsenspiegel" still stated that Emperors nominated bishops in Germany. Robinson suggests that, by the end of the 12th century, "it was the imperial, rather than the papal version of the Concordat of Worms that was generally accepted by German churchmen". The contemporary English historian William of Malmesbury praised the Concordat for curtailing what he perceived as the emperor's overreach, or, as he put it, "severing the sprouting necks of Teuton fury with the axe of Apostolic power". However, he regarded the final settlement not as a defeat of the Empire at the hands of the church, but rather as a reconciliatory effort by the two powers. Although polemicism had died down in the years preceding the Concordat, it did not cease completely, and factionalism within the church especially continued. Gerhoh of Reichersberg believed that the emperor now had the right to request that German bishops pay homage to him, something that would never have been allowed under Paschal, because of the vague clause instructing the newly elected bishop to do what the emperor wished. Gerhoh argued that, now that imperial intervention in episcopal elections had been curtailed, Henry would use this clause to extend his influence in the church by means of homage. Gerhoh was torn between viewing the concordat as the end of a long struggle between pope and empire and seeing it as the beginning of a new one within the church itself. Likewise Adalbert of Mainz—who had casually criticised the agreement in his report to Calixtus—continued to lobby against it, and continued to bring complaints against Henry, who, he alleged, had illegally removed the Bishop of Strassburg, who was suspected of complicity in the death of Duke Berthold of Zaehringen. The reformist party within the church took a similar view, criticising the Concordat for failing to remove all secular influence on the church. For this reason, a group of followers of Paschal II unsuccessfully attempted to prevent the agreement's ratification at the Lateran Council, crying "non placet!" when asked to approve it: "it was only when it was pointed out that much had to be accepted for the sake of peace that the atmosphere quietened". Calixtus told them that they had "not to approve but tolerate" it. At a council in Bamberg in 1122, Henry gathered those nobles who had not attended the Concordat to seek their approval of the agreement, which they gave. The following month he sent cordial letters to Calixtus agreeing with the pope's position that, as brothers in Christ, they were bound by God to work together, and stating that he would soon visit personally to discuss the repatriation of papal land. These letters were, in turn, responded to positively by Calixtus, who instructed his delegates to make good the promises they had made at Worms. Historiography. Gottfried Wilhelm Leibniz called the agreements made at Worms "the oldest concordat in German history, an international treaty", while Augustin Fliche argued that the Concordat effectively instituted the statutes of Ivo of Chartres, a prominent reformer in the early years of the Investiture Contest, a view, it has been suggested, with which most historians agree. 
The historian Uta-Renate Blumenthal writes that, despite its shortcomings, the Concordat freed "[the church and the Empire] from antiquated concepts with their increasingly anachronistic restrictions". According to the historian William Chester Jordan, the Concordat was "of enormous significance" because it demonstrated that the emperor, in spite of his great secular power, did not have any religious authority. On the other hand, argues Karl F. Morrison, any victory the papacy felt it had won was Pyrrhic, as "the king was left in possession of the field". The new peace also allowed the papacy to expand its territories in Italy, such as the Sabina, which had been unobtainable while the dispute with Henry was ongoing, while in Germany a new class of ecclesiastics was created, what Horst Fuhrmann calls the "ecclesiastical princes of the Empire". While most historians agree that the Concordat marks a clear close to the fifty-year-old struggle between church and empire, disagreement continues over just how decisive a termination it was. Historians are also unclear as to the commitment of the pope to the Concordat. Stroll, for example, notes that Henry's oaths were made to the church corporate, and so in perpetuity, whereas Calixtus's may have been made in a personal capacity. This, Stroll argues, would mean that while Henry's commitments to the church applied forever, Calixtus's applied only for the duration of Henry's reign, and at least one contemporary, Otto of Freising, wrote later in the century that he believed this to be the church's position. Stroll considers it "implausible" that Henry and his counsellors would ever have entered into such a one-sided agreement. Indeed, John O'Malley has argued that the emperor had effectively been granted a veto by Calixtus; while in the strictest interpretation of the Gregorian reformers the only two important things in the making of a bishop were his election and consecration, Calixtus had effectively codified a role, however small, for the emperor in this process. Conversely, Benson reckons that while Henry's agreement was with the church in perpetuity, Calixtus's, based on the personal mode of address, was with him personally, and as such not binding on his successors. However, this was also an acknowledgement, he suggests, that much of what the pope did not address was already considered customary, and so did not need addressing. There has also been disagreement over why the Investiture Contest ended with the Concordat as it did. Benson notes that, as a truce, it was primarily intended to stop the fighting rather than to address its original causes. It was "a straightforward, political engagement...a pragmatic agreement" between two political bodies. Indeed, controversy over investiture continued for at least another decade; in that light, suggests Benson, it could be argued that the Concordat did not end the dispute at all. There were "many problems unsolved, and [it] left much room for the free play of power". Political scientist Bruce Bueno de Mesquita has argued that, in the long term, the Concordat was an essential component of the later, gradual creation of the European nation state.
6867
1739907
https://en.wikipedia.org/wiki?curid=6867
Context-free language
In formal language theory, a context-free language (CFL), also called a Chomsky type-2 language, is a language generated by a context-free grammar (CFG). Context-free languages have many applications in programming languages; in particular, most arithmetic expressions are generated by context-free grammars. Background. Context-free grammar. Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language. Automata. The set of all context-free languages is identical to the set of languages accepted by pushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct. Examples. An example context-free language is L = {a^n b^n | n ≥ 1}, the language of all non-empty even-length strings, the entire first halves of which are a's, and the entire second halves of which are b's. L is generated by the grammar S → aSb | ab. This language is not regular. It is accepted by the pushdown automaton M = ({q_0, q_1, q_f}, {a, b}, {A, Z}, δ, q_0, Z, {q_f}), where δ is defined as follows: δ(q_0, a, Z) = (q_0, AZ); δ(q_0, a, A) = (q_0, AA); δ(q_0, b, A) = (q_1, ε); δ(q_1, b, A) = (q_1, ε); δ(q_1, ε, Z) = (q_f, ε). Unambiguous CFLs are a proper subset of all CFLs: there are inherently ambiguous CFLs. An example of an inherently ambiguous CFL is the union of {a^n b^m c^m d^n | n, m > 0} with {a^n b^n c^m d^m | n, m > 0}. This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset {a^n b^n c^n d^n | n > 0}, which is the intersection of these two languages. Dyck language. The language of all properly matched parentheses is generated by the grammar S → SS | (S) | ε. Properties. Context-free parsing. The context-free nature of the language makes it simple to parse with a pushdown automaton. Determining an instance of the membership problem, i.e. given a string w, determining whether w ∈ L(G), where L(G) is the language generated by a given grammar G, is also known as "recognition". Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728596). Conversely, Lillian Lee has shown O(n^(3−ε)) Boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter. Practical uses of context-free languages also require producing a derivation tree that exhibits the structure that the grammar associates with the given string. The process of producing this tree is called "parsing". Known parsers have a time complexity that is cubic in the size of the string that is parsed. Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's algorithm. A special subclass of context-free languages is the deterministic context-free languages, which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by an LR(k) parser. See also parsing expression grammar as an alternative approach to grammar and parser. Closure properties. The class of context-free languages is closed under the following operations.
That is, if "L" and "P" are context-free languages, the following languages are context-free as well: the union L ∪ P, the reversal of L, the concatenation L · P, the Kleene star L* of L, the image φ(L) of L under a homomorphism φ, the image φ⁻¹(L) of L under an inverse homomorphism, and the intersection of L with a regular language. Nonclosure under intersection, complement, and difference. The context-free languages are not closed under intersection. This can be seen by taking the languages {a^n b^n c^m | m, n ≥ 0} and {a^m b^n c^n | m, n ≥ 0}, which are both context-free. Their intersection is {a^n b^n c^n | n ≥ 0}, which can be shown to be non-context-free by the pumping lemma for context-free languages. As a consequence, the context-free languages cannot be closed under complementation, as for any languages "A" and "B", their intersection can be expressed by union and complement: A ∩ B = ∁(∁A ∪ ∁B). In particular, the context-free languages cannot be closed under difference, since the complement can be expressed by difference: ∁L = Σ* \ L. However, if "L" is a context-free language and "D" is a regular language then both their intersection L ∩ D and their difference L \ D are context-free languages. Decidability. In formal language theory, questions about regular languages are usually decidable, but ones about context-free languages are often not. It is decidable whether such a language is finite, but not whether it contains every possible string, is regular, is unambiguous, or is equivalent to a language with a different grammar. The following problems are undecidable for arbitrarily given context-free grammars A and B: equivalence (is L(A) = L(B)?), disjointness (is L(A) ∩ L(B) = ∅?), containment (is L(A) ⊆ L(B)?), universality (is L(A) = Σ*?), regularity (is L(A) a regular language?), and ambiguity (is the grammar A ambiguous?). The following problems are "decidable" for arbitrary context-free languages: emptiness (given a CFG A, is L(A) = ∅?), finiteness (given a CFG A, is L(A) finite?), and membership (given a CFG G and a word w, does w ∈ L(G)?); membership can be decided efficiently, for example by the CYK algorithm. According to Hopcroft, Motwani, Ullman (2003), many of the fundamental closure and (un)decidability properties of context-free languages were shown in the 1961 paper of Bar-Hillel, Perles, and Shamir. Languages that are not context-free. The set {a^n b^n c^n d^n | n > 0} is a context-sensitive language, but there does not exist a context-free grammar generating this language. So there exist context-sensitive languages which are not context-free. To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages or a number of other methods, such as Ogden's lemma or Parikh's theorem.
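As an illustration of the pushdown-automaton acceptance described in the Examples section above, the following is a minimal sketch in Python of a stack-based recognizer for the language {a^n b^n | n ≥ 1}. The function name and structure are illustrative choices for this article, not part of any standard library.

def accepts_anbn(s: str) -> bool:
    # Simulate a simple pushdown recognizer for {a^n b^n | n >= 1}.
    # A symbol is pushed for every 'a' and popped for every 'b'; the input
    # is accepted when the stack empties exactly at the end of the string.
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:           # an 'a' after a 'b' can never be accepted
                return False
            stack.append('A')
        elif ch == 'b':
            seen_b = True
            if not stack:        # more b's than a's
                return False
            stack.pop()
        else:
            return False         # symbol outside the alphabet {a, b}
    return seen_b and not stack  # non-empty string with balanced counts

# A few checks against the definition of the language:
assert accepts_anbn("ab") and accepts_anbn("aaabbb")
assert not accepts_anbn("") and not accepts_anbn("aab") and not accepts_anbn("abab")

The single stack mirrors the transition function given above: the machine stays in a "reading a's" phase, switches irreversibly to a "reading b's" phase, and accepts only if the stack is emptied at the end of the input.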
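The membership problem discussed under "Context-free parsing" can be decided in cubic time by the CYK algorithm for grammars in Chomsky normal form. The sketch below is a minimal illustration, assuming a grammar supplied as a dictionary mapping each nonterminal to its productions; this representation and the function name are assumptions made for the example, not a standard API.

def cyk_member(word, grammar, start="S"):
    # CYK recognition for a grammar in Chomsky normal form.
    # grammar maps a nonterminal to a list of productions, each either a
    # 1-tuple (terminal,) or a 2-tuple (B, C) of nonterminals.
    n = len(word)
    if n == 0:
        return False  # CNF as used here does not derive the empty string
    # table[i][l-1] = set of nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for head, bodies in grammar.items():
            if (ch,) in bodies:
                table[i][0].add(head)
    for length in range(2, n + 1):            # span length
        for i in range(n - length + 1):       # span start
            for split in range(1, length):    # split point inside the span
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for head, bodies in grammar.items():
                    for body in bodies:
                        if len(body) == 2 and body[0] in left and body[1] in right:
                            table[i][length - 1].add(head)
    return start in table[0][n - 1]

# CNF equivalent of S -> aSb | ab for the language {a^n b^n | n >= 1}:
g = {"S": [("A", "T"), ("A", "B")], "T": [("S", "B")], "A": [("a",)], "B": [("b",)]}
assert cyk_member("aabb", g) and not cyk_member("aab", g)

The three nested loops over span length, start position and split point give the cubic running time mentioned in the text; Valiant's reduction to Boolean matrix multiplication improves on this asymptotically but is rarely used in practice.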
6868
27823944
https://en.wikipedia.org/wiki?curid=6868
Caffeine
Caffeine is a central nervous system (CNS) stimulant of the methylxanthine class and is the most commonly consumed psychoactive substance globally due to its widespread legality unlike most stimulants. It is mainly used for its eugeroic (wakefulness promoting), ergogenic (physical performance-enhancing), or nootropic (cognitive-enhancing) properties. Caffeine acts by blocking the binding of adenosine at a number of adenosine receptor types, inhibiting the centrally depressant effects of adenosine and enhancing the release of acetylcholine. Caffeine has a three-dimensional structure similar to that of adenosine, which allows it to bind and block its receptors. Caffeine also increases cyclic AMP levels through nonselective inhibition of phosphodiesterase, increases calcium release from intracellular stores, and antagonizes GABA receptors, although these mechanisms typically occur at concentrations beyond usual human consumption. Caffeine is a bitter, white crystalline purine, a methylxanthine alkaloid, and is chemically related to the adenine and guanine bases of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). It is found in the seeds, fruits, nuts, or leaves of a number of plants native to Africa, East Asia, and South America and helps to protect them against herbivores and from competition by preventing the germination of nearby seeds, as well as encouraging consumption by select animals such as honey bees. The most common sources of caffeine for human consumption are the tea leaves of the "Camellia sinensis" plant and the coffee bean, the seed of the "Coffea" plant. Some people drink beverages containing caffeine to relieve or prevent drowsiness and to improve cognitive performance. To make these drinks, caffeine is extracted by steeping the plant product in water, a process called infusion. Caffeine-containing drinks, such as tea, coffee, and cola, are consumed globally in high volumes. In 2020, almost 10 million tonnes of coffee beans were consumed globally. Caffeine is the world's most widely consumed psychoactive drug. Unlike most other psychoactive substances, caffeine remains largely unregulated and legal in nearly all parts of the world. Caffeine is also an outlier as its use is seen as socially acceptable in most cultures and is encouraged in some. Caffeine has both positive and negative health effects. It can treat and prevent the premature infant breathing disorders bronchopulmonary dysplasia of prematurity and apnea of prematurity. Caffeine citrate is on the WHO Model List of Essential Medicines. It may confer a modest protective effect against some diseases, including Parkinson's disease. Some people experience sleep disruption or anxiety if they consume caffeine, but others show little disturbance. Evidence of a risk during pregnancy is equivocal; some authorities recommend that pregnant women limit caffeine to the equivalent of two cups of coffee per day or less. Caffeine can produce a mild form of drug dependence – associated with withdrawal symptoms such as sleepiness, headache, and irritability – when an individual stops using caffeine after repeated daily intake. Tolerance to the autonomic effects of increased blood pressure, heart rate, and urine output, develops with chronic use (i.e., these symptoms become less pronounced or do not occur following consistent use). Caffeine is classified by the U.S. Food and Drug Administration (FDA) as generally recognized as safe. 
Toxic doses, over 10 grams per day for an adult, greatly exceed the typical dose of under 500 milligrams per day. The European Food Safety Authority reported that up to 400 mg of caffeine per day (around 5.7 mg/kg of body mass per day) does not raise safety concerns for non-pregnant adults, while intakes up to 200 mg per day for pregnant and lactating women do not raise safety concerns for the fetus or the breast-fed infants. A cup of coffee contains 80–175 mg of caffeine, depending on what "bean" (seed) is used, how it is roasted, and how it is prepared (e.g., drip, percolation, or espresso). Thus roughly 50–100 ordinary cups of coffee would be required to reach the toxic dose. However, pure powdered caffeine, which is available as a dietary supplement, can be lethal in tablespoon-sized amounts. Uses. Medical. Caffeine is used for both prevention and treatment of bronchopulmonary dysplasia in premature infants. It may improve weight gain during therapy and reduce the incidence of cerebral palsy as well as reduce language and cognitive delay. On the other hand, subtle long-term side effects are possible. Caffeine is used as a primary treatment for apnea of prematurity, but not prevention. It is also used for orthostatic hypotension treatment. Some people use caffeine-containing beverages such as coffee or tea to try to treat their asthma. Evidence to support this practice is poor. It appears that caffeine in low doses improves airway function in people with asthma, increasing forced expiratory volume (FEV1) by 5% to 18% for up to four hours. The addition of caffeine (100–130 mg) to commonly prescribed pain relievers such as paracetamol or ibuprofen modestly improves the proportion of people who achieve pain relief. Consumption of caffeine after abdominal surgery shortens the time to recovery of normal bowel function and shortens length of hospital stay. Caffeine was formerly used as a second-line treatment for ADHD. It is considered less effective than methylphenidate or amphetamine but more so than placebo for children with ADHD. Children, adolescents, and adults with ADHD are more likely to consume caffeine, perhaps as a form of self-medication. Enhancing performance. Cognitive performance. Caffeine is a central nervous system stimulant that may reduce fatigue and drowsiness. At normal doses, caffeine has variable effects on learning and memory, but it generally improves reaction time, wakefulness, concentration, and motor coordination. The amount of caffeine needed to produce these effects varies from person to person, depending on body size and degree of tolerance. The desired effects arise approximately one hour after consumption, and the desired effects of a moderate dose usually subside after about three or four hours. Caffeine can delay or prevent sleep and improves task performance during sleep deprivation. Shift workers who use caffeine make fewer mistakes that could result from drowsiness. Caffeine in a dose dependent manner increases alertness in both fatigued and normal individuals. A systematic review and meta-analysis from 2014 found that concurrent caffeine and -theanine use has synergistic psychoactive effects that promote alertness, attention, and task switching; these effects are most pronounced during the first hour post-dose. Physical performance. Caffeine is a proven ergogenic aid in humans. Caffeine improves athletic performance in aerobic (especially endurance sports) and anaerobic conditions. 
Moderate doses of caffeine (around 5 mg/kg) can improve sprint performance, cycling and running time trial performance, endurance (i.e., it delays the onset of muscle fatigue and central fatigue), and cycling power output. Caffeine increases basal metabolic rate in adults. Caffeine ingestion prior to aerobic exercise increases fat oxidation, particularly in persons with low physical fitness. Caffeine improves muscular strength and power, and may enhance muscular endurance. Caffeine also enhances performance on anaerobic tests. Caffeine consumption before constant load exercise is associated with reduced perceived exertion. While this effect is not present during exercise-to-exhaustion exercise, performance is significantly enhanced. This is congruent with caffeine reducing perceived exertion, because exercise-to-exhaustion should end at the same point of fatigue. Caffeine also improves power output and reduces time to completion in aerobic time trials, an effect positively (but not exclusively) associated with longer duration exercise. Specific populations. Adults. For the general population of healthy adults, Health Canada advises a daily intake of no more than 400 mg. This limit was found to be safe by a 2017 systematic review on caffeine toxicology. Children. In healthy children, moderate caffeine intake under 400 mg produces effects that are "modest and typically innocuous". As early as six months old, infants can metabolize caffeine at the same rate as adults. Higher doses of caffeine (>400 mg) can cause physiological, psychological and behavioral harm, particularly for children with psychiatric or cardiac conditions. There is no evidence that coffee stunts a child's growth. The American Academy of Pediatrics states that caffeine consumption, particularly from energy and sports drinks, is not appropriate for children and adolescents and should be avoided. This recommendation is based on a clinical report released by the American Academy of Pediatrics in 2011, which reviewed 45 publications from 1994 to 2011 and included input from various stakeholders (pediatricians, the Committee on Nutrition, the Canadian Pediatric Society, the Centers for Disease Control and Prevention, the Food and Drug Administration, the Sports Medicine and Fitness committee, and the National Federations of High School Associations). For children age 12 and under, Health Canada recommends a maximum daily caffeine intake of no more than 2.5 milligrams per kilogram of body weight. Based on average body weights of children, this translates to the following age-based intake limits: 45 mg per day for children aged 4–6, 62.5 mg per day for children aged 7–9, and 85 mg per day for children aged 10–12. Adolescents. Health Canada has not developed advice for adolescents because of insufficient data. However, they suggest that daily caffeine intake for this age group be no more than 2.5 mg/kg body weight. This is because the maximum adult caffeine dose may not be appropriate for light-weight adolescents or for younger adolescents who are still growing. The daily dose of 2.5 mg/kg body weight would not cause adverse health effects in the majority of adolescent caffeine consumers. This is a conservative suggestion, since older and heavier-weight adolescents may be able to consume adult doses of caffeine without experiencing adverse effects. Pregnancy and breastfeeding. The metabolism of caffeine is reduced in pregnancy, especially in the third trimester, and the half-life of caffeine during pregnancy can be increased up to 15 hours (as compared to 2.5 to 4.5 hours in non-pregnant adults).
Evidence regarding the effects of caffeine on pregnancy and breastfeeding is inconclusive. There is limited primary and secondary advice for, or against, caffeine use during pregnancy and its effects on the fetus or newborn. The UK Food Standards Agency has recommended that pregnant women should limit their caffeine intake, out of prudence, to less than 200 mg of caffeine a day – the equivalent of two cups of instant coffee, or one and a half to two cups of fresh coffee. The American Congress of Obstetricians and Gynecologists (ACOG) concluded in 2010 that caffeine consumption is safe up to 200 mg per day in pregnant women. For women who breastfeed, are pregnant, or may become pregnant, Health Canada recommends a maximum daily caffeine intake of no more than 300 mg, or a little over two 8 oz (237 mL) cups of coffee. A 2017 systematic review on caffeine toxicology found evidence that caffeine consumption of up to 300 mg/day by pregnant women is generally not associated with adverse reproductive or developmental effects. There are conflicting reports in the scientific literature about caffeine use during pregnancy. A 2011 review found that caffeine during pregnancy does not appear to increase the risk of congenital malformations, miscarriage or growth retardation even when consumed in moderate to high amounts. Other reviews, however, concluded that there is some evidence that higher caffeine intake by pregnant women may be associated with a higher risk of giving birth to a low birth weight baby, and may be associated with a higher risk of pregnancy loss. A systematic review, analyzing the results of observational studies, suggests that women who consume large amounts of caffeine (greater than 300 mg/day) prior to becoming pregnant may have a higher risk of experiencing pregnancy loss. Adverse effects. Physiological. Caffeine in coffee and other caffeinated drinks can affect gastrointestinal motility and gastric acid secretion. In postmenopausal women, high caffeine consumption can accelerate bone loss. Caffeine, alongside other factors such as stress and fatigue, can also increase the pressure in various muscles, including the eyelids. Acute ingestion of caffeine in large doses (at least 250–300 mg, equivalent to the amount found in 2–3 cups of coffee or 5–8 cups of tea) results in a short-term stimulation of urine output in individuals who have been deprived of caffeine for a period of days or weeks. This increase is due to both a diuresis (increase in water excretion) and a natriuresis (increase in saline excretion); it is mediated via proximal tubular adenosine receptor blockade. The acute increase in urinary output may increase the risk of dehydration. However, chronic users of caffeine develop a tolerance to this effect and experience no increase in urinary output. Psychological. Minor undesired symptoms from caffeine ingestion not sufficiently severe to warrant a psychiatric diagnosis are common and include mild anxiety, jitteriness, insomnia, increased sleep latency, and reduced coordination. Caffeine can have negative effects on anxiety disorders. According to a 2011 literature review, caffeine use may induce anxiety and panic disorders in people with Parkinson's disease. At high doses, typically greater than 300 mg, caffeine can both cause and worsen anxiety. For some people, discontinuing caffeine use can significantly reduce anxiety. In moderate doses, caffeine has been associated with reduced symptoms of depression and lower suicide risk.
Two reviews indicate that increased consumption of coffee and caffeine may reduce the risk of depression. Some textbooks state that caffeine is a mild euphoriant, while others state that it is not a euphoriant. Caffeine-induced anxiety disorder is a subclass of the DSM-5 diagnosis of substance/medication-induced anxiety disorder. Reinforcement disorders. Addiction. Whether caffeine can result in an addictive disorder depends on how addiction is defined. Compulsive caffeine consumption under any circumstances has not been observed, and caffeine is therefore not generally considered addictive. Some diagnostic sources, such as the ICDM-9 and ICD-10, include a classification of caffeine addiction under a broader diagnostic model. Some state that certain users can become addicted and therefore unable to decrease use even though they know there are negative health effects. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur, with people preferring placebo over caffeine in a study on drug abuse liability published in an NIDA research monograph. Some state that research does not provide support for an underlying biochemical mechanism for caffeine addiction. Other research states it can affect the reward system. "Caffeine addiction" was added to the ICDM-9 and ICD-10. However, its addition was contested with claims that this diagnostic model of caffeine addiction is not supported by evidence. The American Psychiatric Association's DSM-5 does not include the diagnosis of a "caffeine addiction" but proposes criteria for the disorder for further study. Dependence and withdrawal. Caffeine withdrawal can cause mild to clinically significant distress or impairment in daily functioning. The frequency at which this occurs is self-reported at 11%, but in lab tests only half of the people who report withdrawal actually experience it, casting doubt on many claims of dependence. Moderate physical dependence and withdrawal symptoms may occur upon abstinence from intakes of greater than 100 mg of caffeine per day, although these symptoms last no longer than a day. Some symptoms associated with psychological dependence may also occur during withdrawal. The diagnostic criteria for caffeine withdrawal require a previous prolonged daily use of caffeine. Following 24 hours of a marked reduction in consumption, a minimum of 3 of these signs or symptoms is required to meet withdrawal criteria: difficulty concentrating, depressed mood/irritability, flu-like symptoms, headache, and fatigue. Additionally, the signs and symptoms must disrupt important areas of functioning and not be associated with effects of another condition. The ICD-11 includes caffeine dependence as a distinct diagnostic category, which closely mirrors the DSM-5's proposed set of criteria for "caffeine-use disorder". Caffeine use disorder refers to dependence on caffeine characterized by failure to control caffeine consumption despite negative physiological consequences. The APA, which published the DSM-5, acknowledged that there was sufficient evidence to create a diagnostic model of caffeine dependence for the DSM-5, but noted that the clinical significance of the disorder is unclear. Due to this inconclusive evidence on clinical significance, the DSM-5 classifies caffeine-use disorder as a "condition for further study".
Tolerance to the effects of caffeine occurs for caffeine-induced elevations in blood pressure and the subjective feelings of nervousness, though the effects are not drastic. Sensitization, the process whereby effects become more prominent with use, may occur for positive effects such as feelings of alertness and wellbeing. Tolerance varies between daily, regular caffeine users and high caffeine users. High doses of caffeine (750 to 1200 mg/day spread throughout the day) have been shown to produce complete tolerance to some, but not all, of the effects of caffeine. Doses as low as 100 mg/day, such as a cup of coffee or two to three servings of caffeinated soft drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not. Risk of other diseases. A neuroprotective effect of caffeine against Alzheimer's disease and dementia is possible but the evidence is inconclusive. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude. One meta-analysis has found that caffeine consumption is associated with a reduced risk of type 2 diabetes. Regular caffeine consumption may reduce the risk of developing Parkinson's disease and may slow the progression of Parkinson's disease. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. The DSM-5 also includes other caffeine-induced disorders, consisting of caffeine-induced anxiety disorder, caffeine-induced sleep disorder and unspecified caffeine-related disorders. The first two disorders are classified under "Anxiety Disorder" and "Sleep-Wake Disorder" because they share similar characteristics. Other disorders that present with significant distress and impairment of daily functioning that warrant clinical attention but do not meet the criteria to be diagnosed under any specific disorders are listed under "Unspecified Caffeine-Related Disorders". Energy crash. Caffeine is reputed to cause a fall in energy several hours after consumption, but this is not well researched. Overdose. Chronic consumption of high doses of caffeine is associated with a condition known as "caffeinism". Caffeinism usually combines caffeine dependency with a wide range of unpleasant symptoms including nervousness, irritability, restlessness, insomnia, headaches, and palpitations after caffeine use. Caffeine overdose can result in a state of central nervous system overstimulation known as caffeine intoxication, a clinically significant temporary condition that develops during, or shortly after, the consumption of caffeine. This syndrome typically occurs only after ingestion of large amounts of caffeine, well over the amounts found in typical caffeinated beverages and caffeine tablets (e.g., more than 400–500 mg at a time). According to the DSM-5, caffeine intoxication may be diagnosed if five (or more) of the following symptoms develop after recent consumption of caffeine: restlessness, nervousness, excitement, insomnia, flushed face, diuresis, gastrointestinal disturbance, muscle twitching, rambling flow of thought and speech, tachycardia or cardiac arrhythmia, periods of inexhaustibility, and psychomotor agitation. According to the International Classification of Diseases (ICD-11), cases of very high caffeine intake (e.g.
> 5 g) may result in caffeine intoxication with symptoms including mania, depression, lapses in judgment, disorientation, disinhibition, delusions, hallucinations or psychosis, and rhabdomyolysis. Energy drinks. High caffeine consumption from energy drinks (at least one liter or 320 mg of caffeine) was associated with short-term cardiovascular side effects including hypertension, prolonged QT interval, and heart palpitations. These cardiovascular side effects were not seen with smaller amounts of caffeine consumption in energy drinks (less than 200 mg). Severe intoxication. There is no known antidote or reversal agent for caffeine intoxication. Treatment of mild caffeine intoxication is directed toward symptom relief; severe intoxication may require peritoneal dialysis, hemodialysis, or hemofiltration. Intralipid infusion therapy is indicated in cases of imminent risk of cardiac arrest in order to scavenge the free serum caffeine. Lethal dose. Death from caffeine ingestion appears to be rare, and is most commonly caused by an intentional overdose of medications. In 2016, 3702 caffeine-related exposures were reported to Poison Control Centers in the United States, of which 846 required treatment at a medical facility and 16 had a major outcome; several caffeine-related deaths are reported in case studies. The LD50 of caffeine in rats is 192 milligrams per kilogram of body mass. The fatal dose in humans is estimated to be 150–200 milligrams per kilogram, which is 10.5–14 grams for a typical adult, equivalent to about 75–100 cups of coffee. There are cases where doses as low as 57 milligrams per kilogram have been fatal. A number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. The lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease. A death was reported in 2013 of a man with liver cirrhosis who overdosed on caffeinated mints. Interactions. Caffeine is a substrate for CYP1A2, and interacts with many substances through this and other mechanisms. Alcohol. In studies using the digit symbol substitution test (DSST), alcohol causes a decrease in performance and caffeine causes a significant improvement. When alcohol and caffeine are consumed jointly, the effects of the caffeine are changed, but the alcohol effects remain the same. For example, consuming additional caffeine does not reduce the effect of alcohol. However, the jitteriness and alertness given by caffeine are decreased when additional alcohol is consumed. Alcohol consumption alone reduces both inhibitory and activational aspects of behavioral control. Caffeine antagonizes the effect of alcohol on the activational aspect of behavioral control, but has no effect on the inhibitory behavioral control. The Dietary Guidelines for Americans recommend avoidance of concomitant consumption of alcohol and caffeine, as taking them together may lead to increased alcohol consumption, with a higher risk of alcohol-associated injury. Smoking. Smoking tobacco has been shown to increase caffeine clearance by 56% as a result of polycyclic aromatic hydrocarbons inducing the CYP1A2 enzyme. The CYP1A2 enzyme that is induced by smoking is responsible for the metabolism of caffeine; increased enzyme activity leads to increased caffeine clearance, and is associated with greater coffee consumption in regular smokers. Birth control.
Birth control pills can extend the half-life of caffeine by as much as 40%, requiring greater attention to caffeine consumption. Medications. Caffeine sometimes increases the effectiveness of some medications, such as those for headaches. Caffeine was determined to increase the potency of some over-the-counter analgesic medications by 40%. The pharmacological effects of adenosine may be blunted in individuals taking large quantities of methylxanthines like caffeine. Some other examples of methylxanthines include the medications theophylline and aminophylline, which are prescribed to relieve symptoms of asthma or COPD. Pharmacology. Pharmacodynamics. In the absence of caffeine and when a person is awake and alert, little adenosine is present in CNS neurons. With a continued wakeful state, over time adenosine accumulates in the neuronal synapse, in turn binding to and activating adenosine receptors found on certain CNS neurons; when activated, these receptors produce a cellular response that ultimately increases drowsiness. When caffeine is consumed, it antagonizes adenosine receptors; in other words, caffeine prevents adenosine from activating the receptor by blocking the location on the receptor where adenosine binds to it. As a result, caffeine temporarily prevents or relieves drowsiness, and thus maintains or restores alertness. Receptor and ion channel targets. Caffeine is an antagonist of adenosine A2A receptors, and knockout mouse studies have specifically implicated antagonism of the A2A receptor as responsible for the wakefulness-promoting effects of caffeine. Antagonism of A2A receptors in the ventrolateral preoptic area (VLPO) reduces inhibitory GABA neurotransmission to the tuberomammillary nucleus, a histaminergic projection nucleus that activation-dependently promotes arousal. This disinhibition of the tuberomammillary nucleus is the downstream mechanism by which caffeine produces wakefulness-promoting effects. Caffeine is an antagonist of all four adenosine receptor subtypes (A1, A2A, A2B, and A3), although with varying potencies. The affinity (KD) values of caffeine for the human adenosine receptors are 12 μM at A1, 2.4 μM at A2A, 13 μM at A2B, and 80 μM at A3. Antagonism of adenosine receptors by caffeine also stimulates the medullary vagal, vasomotor, and respiratory centers, which increases respiratory rate, reduces heart rate, and constricts blood vessels. Adenosine receptor antagonism also promotes neurotransmitter release (e.g., monoamines and acetylcholine), which endows caffeine with its stimulant effects; adenosine acts as an inhibitory neurotransmitter that suppresses activity in the central nervous system. Heart palpitations are caused by blockade of the A1 receptor. Because caffeine is both water- and lipid-soluble, it readily crosses the blood–brain barrier that separates the bloodstream from the interior of the brain. Once in the brain, the principal mode of action is as a nonselective antagonist of adenosine receptors (in other words, an agent that reduces the effects of adenosine). The caffeine molecule is structurally similar to adenosine, and is capable of binding to adenosine receptors on the surface of cells without activating them, thereby acting as a competitive antagonist. In addition to its activity at adenosine receptors, caffeine is an inositol trisphosphate receptor 1 antagonist and a voltage-independent activator of the ryanodine receptors (RYR1, RYR2, and RYR3). It is also a competitive antagonist of the ionotropic glycine receptor. 
Effects on striatal dopamine. While caffeine does not directly bind to any dopamine receptors, it influences the binding activity of dopamine at its receptors in the striatum by binding to adenosine receptors that have formed GPCR heteromers with dopamine receptors, specifically the A1–D1 receptor heterodimer (a receptor complex with one adenosine A1 receptor and one dopamine D1 receptor) and the A2A–D2 receptor heterotetramer (a receptor complex with two adenosine A2A receptors and two dopamine D2 receptors). The A2A–D2 receptor heterotetramer has been identified as a primary pharmacological target of caffeine, primarily because it mediates some of its psychostimulant effects and its pharmacodynamic interactions with dopaminergic psychostimulants. Caffeine also causes the release of dopamine in the dorsal striatum and nucleus accumbens core (a substructure within the ventral striatum), but not the nucleus accumbens shell, by antagonizing A1 receptors in the axon terminal of dopamine neurons and A1–A2A heterodimers (a receptor complex composed of one adenosine A1 receptor and one adenosine A2A receptor) in the axon terminal of glutamate neurons. During chronic caffeine use, caffeine-induced dopamine release within the nucleus accumbens core is markedly reduced due to drug tolerance. Enzyme targets. Caffeine, like other xanthines, also acts as a phosphodiesterase inhibitor. As a competitive nonselective phosphodiesterase inhibitor, caffeine raises intracellular cyclic AMP, activates protein kinase A, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity. Caffeine also affects the cholinergic system, where it is a moderate inhibitor of the enzyme acetylcholinesterase. Pharmacokinetics. Caffeine from coffee or other beverages is absorbed by the small intestine within 45 minutes of ingestion and distributed throughout all bodily tissues. Peak blood concentration is reached within 1–2 hours. It is eliminated by first-order kinetics. Caffeine can also be absorbed rectally, evidenced by suppositories of ergotamine tartrate and caffeine (for the relief of migraine) and of chlorobutanol and caffeine (for the treatment of hyperemesis). However, rectal absorption is less efficient than oral: the maximum concentration (Cmax) and total amount absorbed (AUC) are both about 30% (i.e., 1/3.5) of the oral amounts. Caffeine's biological half-life – the time required for the body to eliminate one-half of a dose – varies widely among individuals according to factors such as pregnancy, other drugs, liver enzyme function level (needed for caffeine metabolism) and age. In healthy adults, caffeine's half-life is between 3 and 7 hours. The half-life is decreased by 30–50% in adult male smokers, approximately doubled in women taking oral contraceptives, and prolonged in the last trimester of pregnancy. In newborns the half-life can be 80 hours or more, dropping rapidly with age, possibly to less than the adult value by age 6 months. The antidepressant fluvoxamine (Luvox) reduces the clearance of caffeine by more than 90%, and increases its elimination half-life more than tenfold, from 4.9 hours to 56 hours. Caffeine is metabolized in the liver by the cytochrome P450 oxidase enzyme system (particularly by the CYP1A2 isozyme) into three dimethylxanthines, each of which has its own effects on the body: paraxanthine (about 84%), which increases lipolysis, leading to elevated glycerol and free fatty acid levels in blood plasma; theobromine (about 12%), which dilates blood vessels and increases urine volume; and theophylline (about 4%), which relaxes smooth muscles of the bronchi. 1,3,7-Trimethyluric acid is a minor caffeine metabolite. 7-Methylxanthine is also a metabolite of caffeine.
Each of the above metabolites is further metabolized and then excreted in the urine. Caffeine can accumulate in individuals with severe liver disease, increasing its half-life. A 2011 review found that increased caffeine intake was associated with a variation in two genes that increase the rate of caffeine catabolism. Subjects who had this mutation on both chromosomes consumed 40 mg more caffeine per day than others. This is presumably due to the need for a higher intake to achieve a comparable desired effect, not because the gene led to a disposition for greater incentive of habituation. Chemistry. Pure anhydrous caffeine is a bitter-tasting, white, odorless powder with a melting point of 235–238 °C. Caffeine is moderately soluble in water at room temperature (2 g/100 mL), but very soluble in boiling water (66 g/100 mL). It is also moderately soluble in ethanol (1.5 g/100 mL). It is weakly basic (pKa of conjugate acid ≈ 0.6), requiring a strong acid to protonate it. Caffeine does not contain any stereogenic centers and hence is classified as an achiral molecule. The xanthine core of caffeine contains two fused rings, a pyrimidinedione and an imidazole. The pyrimidinedione in turn contains two amide functional groups that exist predominantly in a zwitterionic resonance form, in which the nitrogen atoms are double-bonded to their adjacent amide carbon atoms. Hence all six of the atoms within the pyrimidinedione ring system are sp2 hybridized and planar. The imidazole ring also exhibits resonance. Therefore, the fused 5,6 ring core of caffeine contains a total of ten pi electrons and hence, according to Hückel's rule, is aromatic. Synthesis. The biosynthesis of caffeine is an example of convergent evolution among different species. Caffeine may be synthesized in the lab starting with 1,3-dimethylurea and malonic acid. Production of synthesized caffeine largely takes place in pharmaceutical plants in China. Synthetic and natural caffeine are chemically identical and nearly indistinguishable. The primary distinction is that synthetic caffeine is manufactured from urea and chloroacetic acid, while natural caffeine is extracted from plant sources, a process known as decaffeination. Despite the different production methods, the final product and its effects on the body are identical. Research on synthetic caffeine supports that it has the same stimulating effects on the body as natural caffeine. And although many claim that natural caffeine is absorbed more slowly and therefore leads to a gentler caffeine crash, there is little scientific evidence supporting the notion. Decaffeination. Germany, the birthplace of decaffeinated coffee, is home to several decaffeination plants, including the world's largest, Coffein Compagnie. Over half of the decaf coffee sold in the U.S. first travels from the tropics to Germany for caffeine removal before making its way to American consumers. Extraction of caffeine from coffee, to produce caffeine and decaffeinated coffee, can be performed using various solvents; the main methods rely on water, supercritical carbon dioxide, or organic solvents such as ethyl acetate or dichloromethane as the extraction medium. "Decaffeinated" coffees do in fact contain caffeine in many cases – some commercially available decaffeinated coffee products contain considerable levels. One study found that decaffeinated coffee contained 10 mg of caffeine per cup, compared to approximately 85 mg of caffeine per cup for regular coffee. Detection in body fluids.
Caffeine can be quantified in blood, plasma, or serum to monitor therapy in neonates, confirm a diagnosis of poisoning, or facilitate a medicolegal death investigation. Plasma caffeine levels are usually in the range of 2–10 mg/L in coffee drinkers, 12–36 mg/L in neonates receiving treatment for apnea, and 40–400 mg/L in victims of acute overdosage. Urinary caffeine concentration is frequently measured in competitive sports programs, for which a level in excess of 15 mg/L is usually considered to represent abuse. Analogs. Some analog substances have been created which mimic caffeine's properties in either function or structure, or both. Of the latter group are the xanthines DMPX and 8-chlorotheophylline, which is an ingredient in Dramamine. Members of a class of nitrogen-substituted xanthines are often proposed as potential alternatives to caffeine. Many other xanthine analogues constituting the adenosine receptor antagonist class have also been elucidated. Precipitation of tannins. Caffeine, like other alkaloids such as cinchonine, quinine or strychnine, precipitates polyphenols and tannins. This property can be used in a quantitation method. Natural occurrence. Around thirty plant species are known to contain caffeine. Common sources are the "beans" (seeds) of the two cultivated coffee plants, "Coffea arabica" and "Coffea canephora" (the quantity varies, but 1.3% is a typical value); and of the cocoa plant, "Theobroma cacao"; the leaves of the tea plant; and kola nuts. Other sources include the leaves of yaupon holly, South American holly yerba mate, and Amazonian holly guayusa; and seeds from Amazonian maple guarana berries. Temperate climates around the world have produced unrelated caffeine-containing plants. Caffeine in plants acts as a natural pesticide: it can paralyze and kill predator insects feeding on the plant. High caffeine levels are found in coffee seedlings when they are developing foliage and lack mechanical protection. In addition, high caffeine levels are found in the surrounding soil of coffee seedlings, which inhibits seed germination of nearby coffee seedlings, thus giving seedlings with the highest caffeine levels fewer competitors for existing resources for survival. Caffeine is stored in tea leaves in two places. Firstly, in the cell vacuoles, where it is complexed with polyphenols. This caffeine probably is released into the mouth parts of insects, to discourage herbivory. Secondly, around the vascular bundles, where it probably inhibits pathogenic fungi from entering and colonizing the vascular bundles. Caffeine in nectar may improve the reproductive success of pollen-producing plants by enhancing the reward memory of pollinators such as honey bees. The differing perceptions of the effects of ingesting beverages made from various plants containing caffeine could be explained by the fact that these beverages also contain varying mixtures of other methylxanthine alkaloids, including the cardiac stimulants theophylline and theobromine, and polyphenols that can form insoluble complexes with caffeine. Products. Products containing caffeine include coffee, tea, soft drinks ("colas"), energy drinks, other beverages, chocolate, caffeine tablets, other oral products, and inhalation products. According to a 2020 study in the United States, coffee is the major source of caffeine intake in middle-aged adults, while soft drinks and tea are the major sources in adolescents.
Energy drinks are more commonly consumed as a source of caffeine in adolescents as compared to adults. Beverages. Coffee. The world's primary source of caffeine is the coffee "bean" (the seed of the coffee plant), from which coffee is brewed. Caffeine content in coffee varies widely depending on the type of coffee bean and the method of preparation used; even beans within a given bush can show variations in concentration. In general, one serving of coffee ranges from 80 to 100 milligrams for a single shot (30 milliliters) of arabica-variety espresso, to approximately 100–125 milligrams for a cup (120 milliliters) of drip coffee. "Arabica" coffee typically contains half the caffeine of the "robusta" variety. In general, dark-roast coffee has slightly less caffeine than lighter roasts because the roasting process reduces the caffeine content of the bean by a small amount. Tea. Tea contains more caffeine than coffee by dry weight. A typical serving, however, contains much less, since less of the product is used as compared to an equivalent serving of coffee. Also contributing to caffeine content are growing conditions, processing techniques, and other variables. Thus, teas contain varying amounts of caffeine. Tea contains small amounts of theobromine and slightly higher levels of theophylline than coffee. Preparation and many other factors have a significant impact on tea, and color is a poor indicator of caffeine content. Teas like the pale Japanese green tea "gyokuro", for example, contain far more caffeine than much darker teas like "lapsang souchong", which has minimal caffeine content. Soft drinks and energy drinks. Caffeine is also a common ingredient of soft drinks, such as cola, originally prepared from kola nuts. Soft drinks typically contain 0 to 55 milligrams of caffeine per 12 ounce (355 mL) serving. By contrast, energy drinks, such as Red Bull, can start at 80 milligrams of caffeine per serving. The caffeine in these drinks either originates from the ingredients used or is an additive derived from the product of decaffeination or from chemical synthesis. Guarana, a primary ingredient of energy drinks, contains large amounts of caffeine with small amounts of theobromine and theophylline in a naturally occurring slow-release excipient. Cacao solids. Cocoa solids (derived from cocoa beans) contain 230 mg of caffeine per 100 g. The caffeine content varies between cocoa bean strains. Chocolate. The stimulant effect of chocolate may be due to a combination of theobromine and theophylline, as well as caffeine. Tablets. Tablets offer several advantages over coffee, tea, and other caffeinated beverages, including convenience, known dosage, and avoidance of concomitant intake of sugar, acids, and fluids. The use of caffeine in this form is said to improve mental alertness. These tablets are commonly used by students studying for their exams and by people who work or drive for long hours. Other oral products. One U.S. company is marketing oral dissolvable caffeine strips. Another intake route is SpazzStick, a caffeinated lip balm. Alert Energy Caffeine Gum was introduced in the United States in 2013, but was voluntarily withdrawn after an announcement of an investigation by the FDA of the health effects of added caffeine in foods. There is weak evidence that the use of caffeine mouth washes might help cognitive performance. Inhalants.
Similar to an e-cigarette, a caffeine inhaler may be used to deliver caffeine or a stimulant like guarana by vaping. In 2012, the FDA sent a warning letter to one of the companies marketing an inhaler, expressing concerns for the lack of safety information available about inhaled caffeine. History. Discovery and spread of use. According to Chinese legend, the Chinese emperor Shennong, reputed to have reigned in about 3000 BCE, inadvertently discovered tea when he noted that when certain leaves fell into boiling water, a fragrant and restorative drink resulted. Shennong is also mentioned in Lu Yu's "Cha Jing", a famous early work on the subject of tea. The earliest credible evidence of either coffee drinking or knowledge of the coffee plant appears in the middle of the fifteenth century, in the Sufi monasteries of the Yemen in southern Arabia. From Mokha, coffee spread to Egypt and North Africa, and by the 16th century, it had reached the rest of the Middle East, Persia and Turkey. From the Middle East, coffee drinking spread to Italy, then to the rest of Europe, and coffee plants were transported by the Dutch to the East Indies and to the Americas. Kola nut use appears to have ancient origins. It is chewed in many West African cultures, in both private and social settings, to restore vitality and ease hunger pangs. The earliest evidence of cocoa bean use comes from residue found in an ancient Mayan pot dated to 600 BCE. Also, chocolate was consumed in a bitter and spicy drink called "xocolatl", often seasoned with vanilla, chile pepper, and achiote. "Xocolatl" was believed to fight fatigue, a belief probably attributable to the theobromine and caffeine content. Chocolate was an important luxury good throughout pre-Columbian Mesoamerica, and cocoa beans were often used as currency. "Xocolatl" was introduced to Europe by the Spaniards, and became a popular beverage by 1700. The Spaniards also introduced the cacao tree into the West Indies and the Philippines. The leaves and stems of the yaupon holly ("Ilex vomitoria") were used by Native Americans to brew a tea called "asi" or the "black drink". Archaeologists have found evidence of this use far into antiquity, possibly dating to Late Archaic times. Chemical identification, isolation, and synthesis. In 1819, the German chemist Friedlieb Ferdinand Runge isolated caffeine for the first time; he called it "Kaffebase" (i.e., a base that exists in coffee). In 1821, caffeine was isolated both by the French chemist Pierre Jean Robiquet and by another pair of French chemists, Pierre-Joseph Pelletier and Joseph Bienaimé Caventou, according to Swedish chemist Jöns Jacob Berzelius in his yearly journal. Furthermore, Berzelius stated that the French chemists had made their discoveries independently of any knowledge of Runge's or each other's work. However, Berzelius later acknowledged Runge's priority in the extraction of caffeine, stating: "However, at this point, it should not remain unmentioned that Runge (in his "Phytochemical Discoveries", 1820, pages 146–147) specified the same method and described caffeine under the name "Caffeebase" a year earlier than Robiquet, to whom the discovery of this substance is usually attributed, having made the first oral announcement about it at a meeting of the Pharmacy Society in Paris." Pelletier's article on caffeine was the first to use the term in print (in the French form from the French word for coffee: ""). 
It corroborates Berzelius's account: Robiquet was one of the first to isolate and describe the properties of pure caffeine, whereas Pelletier was the first to perform an elemental analysis. In 1827, M. Oudry isolated "théine" from tea, but in 1838 it was proved by Mulder and by Carl Jobst that theine was actually the same as caffeine. In 1895, German chemist Hermann Emil Fischer (1852–1919) first synthesized caffeine from its chemical components (i.e. a "total synthesis"), and two years later, he also derived the structural formula of the compound. This was part of the work for which Fischer was awarded the Nobel Prize in 1902. Historic regulations. Because it was recognized that coffee contained some compound that acted as a stimulant, first coffee and later also caffeine has sometimes been subject to regulation. For example, in the 16th century Islamists in Mecca and in the Ottoman Empire made coffee illegal for some classes. Charles II of England tried to ban it in 1676, Frederick II of Prussia banned it in 1777, and coffee was banned in Sweden at various times between 1756 and 1823. In 1911, caffeine became the focus of one of the earliest documented health scares, when the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was "injurious to health". Although the Supreme Court later ruled in favor of Coca-Cola in "United States v. Forty Barrels and Twenty Kegs of Coca-Cola", two bills were introduced to the U.S. House of Representatives in 1912 to amend the Pure Food and Drug Act, adding caffeine to the list of "habit-forming" and "deleterious" substances, which must be listed on a product's label. Society and culture. Regulations. United States. The US Food and Drug Administration (FDA) considers safe beverages containing less than 0.02% caffeine; but caffeine powder, which is sold as a dietary supplement, is unregulated. It is a regulatory requirement that the label of most prepackaged foods must declare a list of ingredients, including food additives such as caffeine, in descending order of proportion. However, there is no regulatory provision for mandatory quantitative labeling of caffeine, (e.g., milligrams caffeine per stated serving size). There are a number of food ingredients that naturally contain caffeine. These ingredients must appear in food ingredient lists. However, as is the case for "food additive caffeine", there is no requirement to identify the quantitative amount of caffeine in composite foods containing ingredients that are natural sources of caffeine. While coffee or chocolate are broadly recognized as caffeine sources, some ingredients (e.g., guarana, yerba maté) are likely less recognized as caffeine sources. For these natural sources of caffeine, there is no regulatory provision requiring that a food label identify the presence of caffeine nor state the amount of caffeine present in the food. The FDA guidance was updated in 2018. Consumption. Global consumption of caffeine has been estimated at 120,000 tonnes per year, making it the world's most popular psychoactive substance. The consumption of caffeine has remained stable between 1997 and 2015. Coffee, tea and soft drinks are the most common caffeine sources, with energy drinks contributing little to the total caffeine intake across all age groups. Religions. 
The Seventh-day Adventist Church has asked its members to "abstain from caffeinated drinks", but has removed this from baptismal vows (while still recommending abstention as policy). Some adherents of these religions believe that one is not supposed to consume a non-medical, psychoactive substance, or believe that one is not supposed to consume a substance that is addictive. The Church of Jesus Christ of Latter-day Saints has said the following with regard to caffeinated beverages: "... the Church revelation spelling out health practices (Doctrine and Covenants 89) does not mention the use of caffeine. The Church's health guidelines prohibit alcoholic drinks, smoking or chewing of tobacco, and 'hot drinks' – taught by Church leaders to refer specifically to tea and coffee." Gaudiya Vaishnavas generally also abstain from caffeine, because they believe it clouds the mind and overstimulates the senses. To be initiated under a guru, one must have had no caffeine, alcohol, nicotine, or other drugs for at least a year. Caffeinated beverages are widely consumed by Muslims. In the 16th century, some Muslim authorities made unsuccessful attempts to ban them as forbidden "intoxicating beverages" under Islamic dietary laws. Other organisms. The bacterium "Pseudomonas putida" CBB5 can live on pure caffeine and can cleave caffeine into carbon dioxide and ammonia. Caffeine is toxic to birds and to dogs and cats, and has a pronounced adverse effect on mollusks, various insects, and spiders. This is at least partly due to a poor ability to metabolize the compound, causing higher levels for a given dose per unit weight. Caffeine has also been found to enhance the reward memory of honey bees. Research. Caffeine has been used to double chromosomes in haploid wheat.
6874
88026
https://en.wikipedia.org/wiki?curid=6874
Cyc
Cyc (pronounced ) is a long-term artificial intelligence (AI) project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge. The project began in July 1984 at MCC and was developed later by the Cycorp company. The name "Cyc" (from "encyclopedia") is a registered trademark owned by Cycorp. CycL has a publicly released specification, and dozens of HL (Heuristic Level) modules were described in Lenat and Guha's textbook, but the Cyc inference engine code and the full list of HL modules are Cycorp-proprietary. History. The project began in July 1984 by Douglas Lenat as a project of the Microelectronics and Computer Technology Corporation (MCC), a research consortium started by two United States–based corporations "to counter a then ominous Japanese effort in AI, the so-called 'fifth-generation' project." The US passed the National Cooperative Research Act of 1984, which for the first time allowed US companies to "collude" on long-term research. Since January 1995, the project has been under active development by Cycorp, where Douglas Lenat was the CEO. The CycL representation language started as an extension of RLL (the Representation Language Language, developed in 1979–1980 by Lenat and his graduate student Russell Greiner while at Stanford University). In 1989, CycL had expanded in expressive power to higher-order logic (HOL). Cyc's ontology grew to about 100,000 terms in 1994, and as of 2017, it contained about 1,500,000 terms. The Cyc knowledge base involving ontological terms was largely created by hand axiom-writing; it was at about 1 million in 1994, and as of 2017, it is at about 24.5 million. In 2008, Cyc resources were mapped to many Wikipedia articles. Cyc is presently connected to Wikidata. Knowledge base. The knowledge base is divided into "microtheories". Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions. Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant. The concept names in Cyc are CycL "terms" or "constants". Constants start with an optional codice_1 and are case-sensitive. There are constants for: For every instance of the collection codice_9 (i.e., for every chordate), there exists a female animal (instance of codice_10), which is its mother (described by the predicate codice_11). Inference engine. An inference engine is a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs general logical deduction. It also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning. The Cyc inference engine separates the epistemological problem from the heuristic problem. For the latter, Cyc used a community-of-agents architecture in which specialized modules, each with its own algorithm, became prioritized if they could make progress on the sub-problem. Releases. OpenCyc. The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under the Apache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. 
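The knowledge-base and inference-engine behaviour described above — assertions grouped into microtheories, plus rules such as "every instance of the chordate collection has a mother that is a female animal" — can be illustrated with a minimal sketch. The following Python sketch is purely illustrative: the microtheory, collection, and predicate names ("Chordate", "FemaleAnimal", "mother") are hypothetical stand-ins and do not reflect Cycorp's actual CycL vocabulary, inference engine, or APIs.

```python
# Illustrative sketch only: a toy assertion store with one forward-chaining
# rule, loosely in the spirit of Cyc's microtheories and inference engine.
# All names here (Microtheory, "Chordate", "FemaleAnimal", "mother") are
# hypothetical stand-ins, not Cycorp's CycL constants or API.
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Microtheory:
    """A named bundle of assertions that must stay internally consistent."""
    name: str
    facts: set = field(default_factory=set)  # tuples like ("isa", "Fido", "Chordate")

    def assert_fact(self, predicate: str, *args: str) -> None:
        self.facts.add((predicate, *args))

_skolem_ids = count(1)

def infer_mothers(mt: Microtheory) -> set:
    """Rule: every instance of the Chordate collection has a mother that is
    an instance of FemaleAnimal.  Unknown mothers get a skolemized name."""
    inferred = set()
    for predicate, *args in list(mt.facts):
        if predicate == "isa" and args[1] == "Chordate":
            child = args[0]
            if not any(f[0] == "mother" and f[1] == child for f in mt.facts):
                mother = f"HypotheticalMotherOf-{child}-{next(_skolem_ids)}"
                inferred.add(("isa", mother, "FemaleAnimal"))
                inferred.add(("mother", child, mother))
    mt.facts |= inferred
    return inferred

if __name__ == "__main__":
    biology_mt = Microtheory("ToyBiologyMt")
    biology_mt.assert_fact("isa", "Fido", "Chordate")
    for fact in sorted(infer_mothers(biology_mt)):
        print(fact)
```

In the real system, such a rule would be stated declaratively in CycL and dispatched to whichever heuristic-level module could make progress on it, rather than hard-coded as in this sketch.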
The CycL and SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, without source code. It was made available for Linux and Microsoft Windows. The open source Texai project released the RDF-compatible content extracted from OpenCyc. The user interface was written in Java 6. Cycorp was a participant in a Semantic Web working group, the Standard Upper Ontology Working Group, which was active from 2001 to 2003. A Semantic Web version of OpenCyc was available starting in 2008 but was discontinued sometime after 2016. OpenCyc 4.0 was released in June 2012. OpenCyc 4.0 contained 239,000 concepts and 2,093,000 facts; however, these are mainly taxonomic assertions. 4.0 was the last released version; around March 2017, OpenCyc was shut down, the stated reason being that such "fragmenting" led to divergence and to confusion among its users, and that the technical community generally came to think that the OpenCyc fragment "was" Cyc. ResearchCyc. In July 2006, Cycorp released the executable of ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) In addition to the taxonomic information, ResearchCyc includes more semantic knowledge; it also includes a large lexicon, English parsing and generation tools, and Java-based interfaces for knowledge editing and querying. It contains a system for ontology-based data integration. Applications. In 2001, GlaxoSmithKline was funding Cyc, though for unspecified applications. In 2007, the Cleveland Clinic used Cyc to develop a natural-language query interface to biomedical information on cardiothoracic surgeries. A query is parsed into a set of CycL fragments with open variables. The Terrorism Knowledge Base was an application of Cyc that tried to capture descriptions of and knowledge about "terrorist"-related subjects. The knowledge is stored as statements in mathematical logic. The project lasted from 2004 to 2008. Lycos used Cyc for search term disambiguation, but stopped in 2001. CycSecure, a network vulnerability assessment tool based on Cyc, was produced in 2002, with trials at the US STRATCOM Computer Emergency Response Team. One Cyc application has the stated aim of helping students doing math at a 6th-grade level. The application, called MathCraft, was supposed to play the role of a fellow student who is slightly more confused than the user about the subject. As the user gives good advice, Cyc allows the avatar to make fewer mistakes. Criticisms. The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history". Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project to IBM's Watson. Machine-learning scientist Pedro Domingos refers to the project as a "catastrophic failure" for the unending amount of data required to produce any viable results and the inability of Cyc to evolve on its own. Gary Marcus, a cognitive scientist and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news." This is consistent with Doug Lenat's position that "Sometimes the "veneer" of intelligence is not enough". Notable employees. 
This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or Cycorp.
6878
41745938
https://en.wikipedia.org/wiki?curid=6878
Carlos Valderrama
Carlos Alberto Valderrama Palacio ( ; born 2 September 1961), also known as "El Pibe" ("The Kid"), is a Colombian former professional footballer and sports commentator for Fútbol de Primera, who played as an attacking midfielder. Valderrama is considered by many to be one of the greatest South American players in history and one of the best players of his era. In 2004, he was named by Pelé in the FIFA 100 list of the world's greatest living players. A creative playmaker, he is regarded as one of the best Colombian footballers of all time, and by some, as Colombia's greatest player ever. His distinctive hairstyle, as well as his precise passing and technical skills, made him one of South America's most recognisable footballers in the late 1980s and early 1990s. He won the South American Footballer of the Year award in 1987 and 1993. He is the fifth-highest assist provider in the history of national teams and the twelfth overall, including clubs, and in 1999 he was also named one of the top 100 players of the 20th century by World Soccer. Valderrama was a member of the Colombia national football team from 1985 until 1998. He represented Colombia in 111 full internationals and scored 11 times, making him the second-most capped player in the country's history, behind only David Ospina. He played a major role during the golden era of Colombian football in the 1990s, representing his national side in three FIFA World Cups and five Copa América tournaments. After spending most of his career playing club football in South America and Europe, towards the end of his career Valderrama played in Major League Soccer, joining the league in its first season. One of the most recognisable players in the league at the time of its inception, he helped popularise the league during the second half of the 1990s. To this day, he is an icon and is considered one of the most decorated players ever to play in MLS; in 2005, he was named to the MLS All-Time Best XI. Club career. Colombia and Europe. Born in Santa Marta, Colombia, Valderrama began his career at Unión Magdalena of the Colombian First Division in 1981. He later played for Millonarios in 1984. He joined Deportivo Cali in 1985, where he played most of his Colombian football. In 1988, he moved to the French First Division side Montpellier. He initially struggled to adapt to the less technical but faster, more physical, and more tactical brand of football being played in Europe, losing his place in the squad. However, his passing ability later saw him become the club's main creative force, and he played a decisive role as his side won the Coupe de France in 1990. In 1991, he remained in Europe and joined the Spanish side Real Valladolid for a season. He then returned to Colombia in 1992 and went on to play for Independiente Medellín, and subsequently Atlético Junior in 1993, with whom he won the Colombian championship in 1993 and 1995. MLS career. Valderrama began his Major League Soccer career with the US side Tampa Bay Mutiny in the league's inaugural 1996 season. The team won the first-ever Supporters' Shield, awarded for having the league's best regular-season record, while Valderrama was the league's first Most Valuable Player, finishing the season with 4 goals and 17 assists. He remained with the club for the 1997 season, and also spent a spell on loan back at Deportivo Cali in Colombia, before moving to another MLS side, Miami Fusion, in 1998, where he also remained for two seasons. 
He returned to Tampa Bay in 2000, spending two more seasons with the club; while a member of the Mutiny, the team would sell Carlos Valderrama wigs at Tampa Stadium. In the 2000 MLS season, the 38-year-old Valderrama recorded the only 20+ assist season in MLS history—ending the season with 26 — a single season assist record that remains intact to this day, and which MLS itself suggested was an "unbreakable" record in a 2012 article. In 2001, Valderrama joined the Colorado Rapids, and remained with the team until 2002, when he retired. He played his last career match in a 1–1 draw with the Kansas City Wizards on 20 September 2002, with Valderrama assisting Mark Chung's goal, and in doing so at the age of 41 years and 18 days, he became the oldest player in the league's history at the time, a record that has since been surpassed by four other players, including three goalkeepers. His American soccer league career spanned a total of eight years, during which he made 175 appearances. In the MLS, Valderrama scored relatively few goals (16) for a midfielder, but is the league's fourth all-time leader in assists (114) after Brad Davis (123), Steve Ralston (135) – a former teammate, and Landon Donovan (145). In 2005, he was named to the MLS All-Time Best XI. International career. Valderrama was a member of the Colombia national football team from 1985 until 1998; he made 111 international appearances, scoring 11 goals, making him the most capped outfield player in the country's history. He represented and captained his national side in the 1990, 1994, and 1998 FIFA World Cups, and also took part in the 1987, 1989, 1991, 1993, and 1995 Copa América tournaments. Valderrama made his international debut on 27 October 1985, in a 3–0 defeat to Paraguay in a 1986 World Cup qualifying match, at the age of 24. In his first major international tournament, he helped Colombia to a third-place finish at the 1987 Copa América in Argentina, as his team's captain, where he was named the tournament's best player; during the tournament, he scored the opening goal in Colombia's 2–0 over Bolivia on 1 July, their first match of the group stage. Some of Valderrama's most impressive international performances came during the 1990 FIFA World Cup in Italy, during which he served as Colombia's captain. He helped his team to a 2–0 win against the UAE in Colombia's opening match of the group stage, scoring the second goal of the match with a strike from 20 yards. Colombia lost their second match against Yugoslavia, however, needing at least a draw against the eventual champions West Germany in their final group match in order to advance to the next round of the competition. In the decisive game, German striker Pierre Littbarski scored what appeared to be the winning goal in the 88th minute of the game; however, within the last minute of injury time, Valderrama beat several opposing players and made a crucial left-footed pass to Freddy Rincón, who subsequently equalised, sealing a place for Colombia in the second round of the tournament with a 1–1 draw. Colombia were eliminated in the round of 16, following a 2–1 extra time loss to Cameroon. On 5 September 1993, Valderrama contributed to Colombia's historic 5–0 victory over South American rivals Argentina at the Monumental in Buenos Aires, which allowed them to qualify for the 1994 World Cup. 
Although much was expected of Valderrama at the World Cup, an injury during a pre-tournament warm-up game put his place in the squad in jeopardy; although he was able to regain match fitness in time for the tournament, Colombia disappointed and suffered a first-round elimination following defeats to Romania and the hosts USA. However, it is widely believed that internal problems and threats by drug cartel groups at the time contributed to the team's underwhelming results in the competition, in particular following the murder of Andrés Escobar after Colombia's 2–1 defeat to the host nation in the second group match; during the match, the Colombian defender had netted an own goal to open the scoring, which ultimately proved to be decisive, despite a 2–0 win over Switzerland in the final first round fixture. Four years later, Valderrama led his nation to qualify for the 1998 World Cup in France, scoring three goals during the qualifying stages. His impact in the final tournament at the advancing age of 37, however, was less decisive, and, despite defeating Tunisia, Colombia once again suffered a first round exit, following a 2–0 defeat against England, which was Valderrama's final international appearance. Playing style. Although Valderrama is often defined as a 'classic number 10 playmaker', due to his creativity and offensive contribution, in reality he was not a classic playmaker in the traditional sense. Although he often wore the number 10 shirt throughout his career and was deployed as an attacking midfielder at times, he played mostly in deeper positions in the centre of the pitch – often operating in a free role as a deep-lying playmaker, rather than in more advanced midfield positions behind the forwards – in order to have a greater influence on the game. A team-player, Valderrama was also known to be an extremely selfless midfielder, who preferred assisting his teammates over going for goal himself; his tactical intelligence, positioning, reading of the game, efficient movement, and versatile range of passing enabled him to find space for himself to distribute and receive the ball, which allowed him both to set the tempo of his team in midfield with short, first time exchanges, or create chances with long lobbed passes or through balls. Valderrama's most instantly recognisable physical features were his big afro-blonde hairstyle, jewelry, and moustache, but he was best known for his grace and elegance on the ball, as well as his agility, and quick feet as a footballer. His control, dribbling ability and footwork were similar to those of smaller players, which for a player of Valderrama's size and physical build was fairly uncommon, and he frequently stood out throughout his career for his ability to use his strength, balance, composure, and flamboyant technique to shield the ball from opponents when put under pressure, and retain possession in difficult situations, often with elaborate skills, which made him an extremely popular figure with the fans. Valderrama's mix of physical strength, two-footed ability, unpredictability and flair enabled him to produce key and incisive performances against top-tier teams, while his world class vision and exceptional passing and crossing ability with his right foot made him one of the best assist providers of his time; his height, physique and elevation also made him effective in the air, and he was also an accurate free kick taker and striker of the ball, despite not being a particularly prolific goalscorer. 
Despite his natural talent and ability as a footballer, Valderrama earned a reputation for having a "languid" playing style, as well as lacking notable pace, being unfit, and for having a poor defensive work-rate on the pitch, in particular, after succumbing to the physical effects of ageing in his later career in the MLS. In his first season in France, he also initially struggled to adapt to the faster-paced, more physical, and tactically rigorous European brand of football, which saw him play in an unfamiliar position, and gave him less space and time on the ball to dictate attacking passing moves; he was criticised at times for his lack of match fitness and his low defensive contribution, which initially limited his appearances with the club, although he later successfully became a key creative player in his team's starting line-up due to his discipline, skill, and his precise and efficient passing. Despite these claims, earlier in his career, however, Valderrama demonstrated substantial pace, stamina, and defensive competence. Former French defender Laurent Blanc, who played with Valderrama in Montpellier, described him thusly: "In the fast and furious European game he wasn't always at his ease. He was a natural exponent of 'toque', keeping the ball moving. But he was so gifted that we could give him the ball when we didn't know what else to do with it knowing he wouldn't lose it... and often he would do things that most of us only dream about." Retirement and legacy. In February 2004, Valderrama ended his 22-year career in a tribute match at the Metropolitan stadium of Barranquilla, with some of the most important football players of South America, such as Diego Maradona, Enzo Francescoli, Iván Zamorano, and José Luis Chilavert. In 2006, a 22-foot bronze statue of Valderrama, created by Colombian artist Amilkar Ariza, was erected outside Estadio Eduardo Santos in Valderrama's birthplace of Santa Marta. Valderrama was the only Colombian to be featured by Pelé in FIFA's 125 Top Living Football Players list in March 2004. Media. Valderrama appeared on the cover of Konami's "International Superstar Soccer Pro 98". In the Nintendo 64 version of the game, he is referred to by his nickname, "El Pibe". Valderrama has also appeared in EA Sports' FIFA football video game series; he was named one of the Ultimate Team Legend cards in "FIFA 15". Besides his link to videogames, Valderrama has been present in sports media through his work with Fútbol de Primera, Andrés Cantor's radio station. He works as a color commentator during broadcasts of different matches, mostly participating during the FIFA World Cup, alongside play-by-play commentators like Sammy Sadovnik or Cantor himself. Coaching career. Since retiring from professional football, Valderrama has become assistant manager of Atlético Junior. On 1 November 2007, Valderrama accused a referee of corruption by waving cash in the face of Oscar Julian Ruiz when the official awarded a penalty to América de Cali. Junior lost the match 4–1, which ended the club's hopes of playoff qualification. He later also served as a coach for a football academy called Clearwater Galactics in Clearwater, Florida. Personal life. Valderrama "El Pibe" married Claribeth Galván, a woman from La Guajira, with whom he had three children: Alan, who played soccer but dropped out; Kenny, who studies and plays for the Universidad Autónoma del Caribe; and Carlos Alberto, the "gringo" of the family because he grew up in Tampa, Florida, and became a basketball player. 
From his second marriage, to Elvira Redondo, a native of the Colombian coast with whom he has lived in the U.S. for several years, he has twin daughters, Stéphany and Carla. He is also the father of Carlos Alberto, a child born outside of marriage whom "El Pibe" acknowledged in a lawsuit. "Scores and results list Colombia's goal tally first, score column indicates score after each Valderrama goal." Honours. Montpellier Atlético Junior Tampa Bay Mutiny Individual
6880
35936988
https://en.wikipedia.org/wiki?curid=6880
Caesar salad
A Caesar salad (also spelled Cesar, César and Cesare), also known as Caesar's salad, is a green salad of romaine lettuce and croutons dressed with lemon juice (or lime juice), olive oil, eggs, Worcestershire sauce, anchovies, garlic, Dijon mustard, Parmesan and black pepper. The salad was created on July 4, 1924, by Caesar Cardini at Caesar's in Tijuana, Mexico, when the kitchen was overwhelmed and short on ingredients. It was originally prepared tableside, and it is still prepared tableside at the original venue. History. The salad's creation is generally attributed to the restaurateur Caesar Cardini, an Italian immigrant who operated restaurants in Mexico and the United States. Cardini lived in San Diego, but ran one of his restaurants, Caesar's, in Tijuana, Mexico, to attract American customers seeking to circumvent the restrictions of Prohibition. His daughter, Rosa, recounted that her father invented the salad at the Tijuana restaurant when a Fourth of July rush in 1924 depleted the kitchen's supplies. Cardini made do with what he had, adding the dramatic flair of table-side tossing by the chef. Some other accounts of the history state that Alex Cardini, Caesar Cardini's brother, made the salad, and that the salad was previously named the "Aviator Salad" because it was made for aviators who traveled over during Prohibition. A number of Cardini's staff have also said that they invented the dish. A popular myth attributes its invention to Julius Caesar. A 2024 book confirmed the claim that Caesar Cardini originated the recipe. Livio Santini's son, Aldo, countered that his father provided the recipe while working as a cook in Cardini's restaurant. The American chef and writer Julia Child said that she had eaten a Caesar salad at Cardini's restaurant in her youth during the 1920s, made with whole romaine lettuce leaves, which were meant to be lifted by the stem and eaten with the fingers, tossed with olive oil, salt, pepper, lemon juice, Worcestershire sauce, coddled eggs, Parmesan, and croutons made with garlic-infused oil. In 1946, the newspaper columnist Dorothy Kilgallen wrote of a Caesar containing anchovies, differing from Cardini's version: The big food rage in Hollywood—the Caesar salad—will be introduced to New Yorkers by Gilmore's Steak House. It's an intricate concoction that takes ages to prepare and contains (zowie!) lots of garlic, raw or slightly coddled eggs, croutons, romaine, anchovies, parmeasan cheese, olive oil, vinegar and plenty of black pepper. In a 1952 interview, Cardini said the salad became well known in 1937, when Manny Wolf, story editor and Paramount Pictures writer's department head, provided the recipe to Hollywood restaurants.<ref name="stuff/10429532"></ref><ref name="kitchenproject/CaesarSalad"></ref> In the 1970s, Child published a recipe in her book "From Julia Child's Kitchen", based on an interview with Cardini's daughter, in which the ingredients are tossed one-at-a-time with the lettuce leaves. Cardini's daughter and several other sources have testified that the original recipe used only Worcestershire sauce, not anchovies, mustard, or herbs, which Cardini considered too bold in flavor. Modern recipes typically include anchovies as a key ingredient, and are frequently emulsified or based on mayonnaise. Dressing. Bottled Caesar dressings are produced and marketed by many companies, including Cardini's, Bolthouse Farms, Ken's Foods, Marzetti, Newman's Own, Panera Bread, Trader Joe's, and Whole Foods Market. 
The trademark brands "Cardini's", "Caesar Cardini's" and "The Original Caesar Dressing" are all claimed to date to February 1950, although they were only registered decades later. Ingredients. Common ingredients in many recipes:
* romaine lettuce
* olive oil
* crushed garlic
* salt
* Dijon mustard
* black pepper
* lemon juice
* Worcestershire sauce
* anchovies
* whole eggs or egg yolks (raw, poached, or coddled)
* grated Parmesan cheese
* croutons
Variations include varying the leaf, adding meat such as grilled chicken or bacon, or omitting ingredients such as anchovies and eggs. While the original Caesar's in Tijuana uses lime juice in its current recipe, most modern recipes use lemon juice or vinegar. Some chefs experiment more broadly with variations of the salad, using the familiar, appealing "Caesar" name to attract diners to dishes with a similar hit of "umami, fat, and tons of salt" that otherwise bear little resemblance to the original.
6881
28117617
https://en.wikipedia.org/wiki?curid=6881
Cecilia Beaux
Eliza Cecilia Beaux (May 1, 1855 – September 17, 1942) was an American artist and the first woman to teach art at the Pennsylvania Academy of the Fine Arts. Known for her elegant and sensitive portraits of friends, relatives, and Gilded Age patrons, Beaux painted many famous subjects including First Lady Edith Roosevelt, Admiral Sir David Beatty and Georges Clemenceau. Beaux was trained in Philadelphia and went on to study in Paris where she was influenced by academic artists Tony Robert-Fleury and William-Adolphe Bouguereau as well as the work of Édouard Manet and Edgar Degas. Her style was compared to that of John Singer Sargent; at one exhibition, Bernard Berenson joked that her paintings were the best Sargents in the room. Like her instructor William Sartain, she believed there was a connection between physical characteristics and behavioral traits. Beaux was awarded a gold medal for lifetime achievement by the National Institute of Arts and Letters, and honored by Eleanor Roosevelt as "the American woman who had made the greatest contribution to the culture of the world". Early life and education. Beaux was born on May 1, 1855, in Philadelphia, the younger daughter of French silk manufacturer Jean Adolphe Beaux and teacher Cecilia Kent Leavitt. Her mother was the daughter of prominent businessman John Wheeler Leavitt of New York City and his wife, Cecilia Kent of Suffield, Connecticut. Cecilia Kent Leavitt died from puerperal fever 12 days after giving birth at age 33. Cecilia and her sister Etta were subsequently raised by their maternal grandmother and aunts, primarily in Philadelphia. Her father, unable to bear the grief of his loss, and feeling adrift in a foreign country, returned to his native France for 16 years, with only one visit back to Philadelphia. He returned when Cecilia was two, but left four years later after his business failed. As she confessed later, "We didn't love Papa very much, he was so foreign. We thought him "peculiar"." Her father did have a natural aptitude for drawing and the sisters were charmed by his whimsical sketches of animals. Later, Beaux would discover that her French heritage would serve her well during her pilgrimage and training in France. In Philadelphia, Beaux's aunt Emily married mining engineer William Foster Biddle, whom Beaux would later describe as "after my grandmother, the strongest and most beneficent influence in my life." For fifty years, he cared for his nieces-in-law with consistent attention and occasional financial support. Her grandmother, on the other hand, provided day-to-day supervision and kindly discipline. Whether with housework, handiwork, or academics, Grandma Leavitt offered a pragmatic framework, stressing that "everything undertaken must be completed, conquered." The Civil War years were particularly challenging, but the extended family survived despite little emotional or financial support from Beaux's father. After the war, Beaux began to spend some time in the household of "Willie" and Emily, both proficient musicians. Beaux learned to play the piano but preferred singing. The musical atmosphere later proved an advantage for her artistic ambitions. Beaux recalled, "They understood perfectly the spirit and necessities of an artist's life." In her early teens, she had her first major exposure to art during visits with Willie to the nearby Pennsylvania Academy of the Fine Arts, one of America's foremost art schools and museums. 
Though fascinated by the narrative elements of some of the pictures, particularly the Biblical themes of the massive paintings of Benjamin West, at this point Beaux had no aspirations of becoming an artist. Her childhood was a sheltered though generally happy one. As a teen she already manifested the traits, as she described, of "both a realist and a perfectionist, pursued by an uncompromising passion for carrying through." She attended the Misses Lyman School and was just an average student, though she did well in French and Natural History. However, she was unable to afford the extra fee for art lessons. At age 16, Beaux began art lessons with a relative, Catherine Ann Drinker, an accomplished artist who had her own studio and a growing clientele. Drinker became Beaux's role model, and she continued lessons with Drinker for a year. She then studied for two years with the painter Francis Adolf Van der Wielen, who offered lessons in perspective and drawing from casts during the time that the new Pennsylvania Academy of the Fine Arts was under construction. Given the bias of the Victorian age, female students were denied direct study in anatomy and could not attend drawing classes with live models (who were often prostitutes) until a decade later. At 18, Beaux was appointed as a drawing teacher at Miss Sanford's School, taking over Drinker's post. She also gave private art lessons and produced decorative art and small portraits. Her own studies were mostly self-directed. Beaux received her first introduction to lithography doing copy work for Philadelphia printer Thomas Sinclair and she published her first work in "St. Nicholas" magazine in December 1873. Beaux demonstrated accuracy and patience as a scientific illustrator, creating drawings of fossils for Edward Drinker Cope, for a multi-volume report sponsored by the U.S. Geological Survey. However, she did not find technical illustration suitable for a career (the extreme exactitude required gave her pains in the "solar plexus"). At this stage, she did not yet consider herself an artist. Beaux began attending the Pennsylvania Academy of the Fine Arts in Philadelphia in 1876, then under the dynamic influence of Thomas Eakins, whose work "The Gross Clinic" had "horrified Philadelphia Exhibition-goers as a gory spectacle" at the Centennial Exhibition of 1876. She steered clear of the controversial Eakins, though she much admired his work. His progressive teaching philosophy, focused on anatomy and live study and allowed the female students to partake in segregated studios, eventually led to his firing as director of the academy. She did not ally herself with Eakins' ardent student supporters, and later wrote, "A curious instinct of self-preservation kept me outside the magic circle." Instead, she attended costume and portrait painting classes for three years taught by the ailing director Christian Schussele. Beaux won the Mary Smith Prize at the Pennsylvania Academy of the Fine Arts exhibitions in 1885, 1887, 1891, and 1892. After leaving the academy, the 24-year-old Beaux decided to try her hand at porcelain painting and she enrolled in a course at the National Art Training School. She was well suited to the precise work but later wrote, "this was the lowest depth I ever reached in commercial art, and although it was a period when youth and romance were in their first attendance on me, I remember it with gloom and record it with shame." 
She studied privately with William Sartain, a friend of Eakins and a New York artist invited to Philadelphia to teach a group of art students, starting in 1881. Though Beaux admired Eakins more and thought his painting skill superior to Sartain's, she preferred the latter's gentle teaching style which promoted no particular aesthetic approach. Unlike Eakins, however, Sartain believed in phrenology and Beaux adopted a lifelong belief that physical characteristics correlated with behaviors and traits. Beaux attended Sartain's classes for two years, then rented her own studio and shared it with a group of women artists who hired a live model and continued without an instructor. After the group disbanded, Beaux set in earnest to prove her artistic abilities. She painted a large canvas in 1884, "Les Derniers Jours d'Enfance", a portrait of her sister and nephew whose composition and style revealed a debt to James McNeill Whistler and whose subject matter was akin to Mary Cassatt's mother-and-child paintings. It was awarded a prize for the best painting by a female artist at the academy, and further exhibited in Philadelphia and New York. Following that seminal painting, she painted over 50 portraits in the next three years with the zeal of a committed professional artist. Her invitation to serve as a juror on the hanging committee of the academy confirmed her acceptance amongst her peers. In the mid-1880s, she was receiving commissions from notable Philadelphians and earning $500 per portrait, comparable to what Eakins commanded. When her friend Margaret Bush-Brown insisted that "Les Derniers" was good enough to be exhibited at the famed Paris Salon, Beaux relented and sent the painting abroad in the care of her friend, who managed to get the painting into the exhibition. Paris. At 32, despite her success in Philadelphia, Beaux decided that she still needed to advance her skills. She left for Paris with cousin May Whitlock, forsaking several suitors and overcoming the objections of her family. There she trained at the Académie Julian, the largest art school in Paris, and at the Académie Colarossi, receiving weekly critiques from established masters like Tony Robert-Fleury and William-Adolphe Bouguereau. She wrote, "Fleury is much less benign than Bouguereau and don't temper his severities…he hinted of possibilities before me and as he rose said the nicest thing of all, 'we will do all we can to help you'…I want these men…to know me and recognize that I can do something." Though advised regularly of Beaux's progress abroad and to "not be worried about any indiscretions of ours", her Aunt Eliza repeatedly reminded her niece to avoid the temptations of Paris, "Remember you are first of all a Christian – then a woman and last of all an Artist." When Beaux arrived in Paris, the Impressionists, a group of artists who had begun their own series of independent exhibitions from the official Salon in 1874, were beginning to lose their solidarity. Also known as the "Independents" or "Intransigents", the group which at times included Degas, Monet, Sisley, Caillebotte, Pissarro, Renoir, and Berthe Morisot, had been receiving the wrath of the critics for several years. Their art, though varying in style and technique, was the antithesis of the type of Academic art in which Beaux was trained and of which her teacher William-Adolphe Bouguereau was a leading master. 
In the summer of 1888, with classes in summer recess, Beaux worked in the fishing village of Concarneau with the American painters Alexander Harrison and Charles Lazar. She tried applying the plein-air painting techniques used by the Impressionists to her own landscapes and portraiture, with little success. Unlike her predecessor Mary Cassatt, who had arrived near the beginning of the Impressionist movement 15 years earlier and who had absorbed it, Beaux's artistic temperament, precise and true to observation, would not align with Impressionism and she remained a realist painter for the rest of her career, even as Cézanne, Matisse, Gauguin, and Picasso were beginning to take art into new directions. Beaux mostly admired classic artists like Titian and Rembrandt. Her European training did influence her palette, however, and she adopted more white and paler coloration in her oil painting, particularly in depicting female subjects, an approach favored by Sargent as well. Return to Philadelphia. Back in the United States in 1889, Beaux proceeded to paint portraits in the grand manner, taking as her subjects members of her sister's family and of Philadelphia's elite. In making her decision to devote herself to art, she also thought it was best not to marry, and in choosing male company she selected men who would not threaten to sidetrack her career. She resumed life with her family, and they supported her fully, acknowledging her chosen path and demanding of her little in the way of household responsibilities, "I was never once asked to do an errand in town, some bit of shopping…so well did they understand." She developed a structured, professional routine, arriving promptly at her studio, and expected the same from her models. The five years that followed were highly productive, resulting in over forty portraits. In 1890 she exhibited at the Paris Exposition, obtained in 1893 the gold medal of the Philadelphia Art Club, and also the Dodge prize at the New York National Academy of Design. She exhibited her work at the Palace of Fine Arts and The Woman's Building at the 1893 World's Columbian Exposition in Chicago, Illinois. Her portrait of "The Reverend Matthew Blackburne Grier" was particularly well-received, as was "Sita and Sarita", a portrait of her cousin Charles W. Leavitt's wife Sarah (Allibone) Leavitt in white, with a small black cat perched on her shoulder, both gazing out mysteriously. The mesmerizing effect prompted one critic to point out "the witch-like weirdness of the black kitten" and for many years, the painting solicited questions by the press. But the result was not pre-planned, as Beaux's sister later explained, "Please make no mystery about it—it was only an idea to put the black kitten on her cousin's shoulder. Nothing deeper." Beaux donated "Sita and Sarita" to the Musée du Luxembourg, but only after making a copy for herself. Another highly regarded portrait from that period is "New England Woman" (1895), a nearly all-white oil painting which was purchased by the Pennsylvania Academy of the Fine Arts. In 1895, Beaux became the first woman to have a regular teaching position at the Pennsylvania Academy of the Fine Arts, where she instructed in portrait drawing and painting for the next twenty years. That rare type of achievement by a woman prompted one local newspaper to state, "It is a legitimate source of pride to Philadelphia that one of its most cherished institutions has made this innovation." She was a popular instructor. 
In 1896, Beaux returned to France to see a group of her paintings presented at the Salon. Influential French critic M. Henri Rochefort commented, "I am compelled to admit, not without some chagrin, that not one of our female artists…is strong enough to compete with the lady who has given us this year the portrait of Dr. Grier. Composition, flesh, texture, sound drawing—everything is there without affectation, and without seeking for effect." In 1898, Beaux painted probably her finest portrait, Man with the Cat (Henry Sturgis Drinker), now in Smithsonian American Art Museum. Drinker was Beaux's very successful brother-in-law. Cecilia Beaux considered herself a "New Woman", a 19th-century woman who explored educational and career opportunities that had generally been denied to women. In the late 19th century Charles Dana Gibson depicted the "New Woman" in his painting, "The Reason Dinner was Late", which is "a sympathetic portrayal of artistic aspiration on the part of young women" as she paints a visiting policeman. This "New Woman" was successful, highly trained, and often did not marry; other such women included Ellen Day Hale, Mary Cassatt, Elizabeth Nourse and Elizabeth Coffin. Beaux was a member of Philadelphia's The Plastic Club. Other members included Elenore Abbott, Jessie Willcox Smith, Violet Oakley, Emily Sartain, and Elizabeth Shippen Green. Many of the women who founded the organization had been students of Howard Pyle. It was founded to provide a means to encourage one another professionally and create opportunities to sell their works of art. New York City. By 1900 the demand for Beaux's work brought clients from Washington, D.C., to Boston, prompting the artist to move to New York City, where she spent the winters, while summering at Green Alley, the home and studio she had built in Gloucester, Massachusetts. Beaux's friendship with Richard Gilder, editor-in-chief of the literary magazine "The Century", helped promote her career and he introduced her to the elite of society. Among her portraits which followed from that association are those of Georges Clemenceau; First Lady Edith Roosevelt and her daughter; and Admiral Sir David Beatty. She also sketched President Teddy Roosevelt during her White House visits in 1902, during which "He sat for two hours, talking most of the time, reciting Kipling, and reading scraps of Browning." Beaux also became very close with Gilder's daughter Dorothea, and the two women exchanged affectionate letters for many years. Her portraits "Fanny Travis Cochran", "Dorothea and Francesca", and "Ernesta and her Little Brother", are fine examples of her skill in painting children; "Ernesta with Nurse", one of a series of essays in luminous white, was a highly original composition, seemingly without precedent. She became a member of the National Academy of Design in 1902. and won the Logan Medal of the arts at the Art Institute of Chicago in 1921. Green Alley. By 1906, Beaux began to live year-round at Green Alley, in a comfortable colony of "cottages" belonging to her wealthy friends and neighbors. All three aunts had died and she needed an emotional break from Philadelphia and New York City. She managed to find new subjects for portraiture, working in the mornings and enjoying a leisurely life the rest of the time. She carefully regulated her energy and her activities to maintain a productive output, and considered that a key to her success. On why so few women succeeded in art as she did, she stated, "Strength is the stumbling block. 
They (women) are sometimes unable to stand the hard work of it day in and day out. They become tired and cannot reenergize themselves." While Beaux stuck to her portraits of the elite, American art was advancing into urban and social subject matter, led by artists such as Robert Henri who espoused a totally different aesthetic, "Work with great speed..Have your energies alert, up and active. Do it all in one sitting if you can. In one minute if you can. There is no use delaying…Stop studying water pitchers and bananas and paint everyday life." He advised his students, among them Edward Hopper and Rockwell Kent, to live with the common man and paint the common man, in total opposition to Cecilia Beaux's artistic methods and subjects. The clash of Henri and William Merritt Chase (representing Beaux and the traditional art establishment) resulted in 1907 in the independent exhibition by the urban realists known as "The Eight" or the Ashcan School. Beaux and her art friends defended the old order, and many thought (and hoped) the new movement to be a passing fad, but it turned out to be a revolutionary turn in American art. In 1910, her beloved Uncle Willie died. Though devastated by the loss, at 55 year old, Beaux remained highly productive. In the next five years she painted almost 25 percent of her lifetime output and received a steady stream of honors. She had a major exhibition of 35 paintings at the Corcoran Gallery of Art in Washington, D.C., in 1912. Despite her continuing production and accolades, however, Beaux was working against the current of tastes and trends in art. The famed "Armory Show" of 1913 in New York City was a landmark presentation of 1,200 paintings showcasing Modernism. Beaux believed that the public, initially of mixed opinion about the "new" art, would ultimately reject it and return its favor to the Pre-Impressionists. Beaux was crippled after breaking her hip while walking in Paris in 1924. With her health impaired, her work output dwindled for the remainder of her life. That same year Beaux was asked to produce a self-portrait for the Medici collection in the Uffizi Gallery in Florence. In 1930 she published an autobiography, "Background with Figures". Her later life was filled with honors. In 1930 she was elected a member of the National Institute of Arts and Letters; in 1933 came membership in the American Academy of Arts and Letters, which two years later organized the first major retrospective of her work. Also in 1933 Eleanor Roosevelt honored Beaux as "the American woman who had made the greatest contribution to the culture of the world". In 1942 The National Institute of Arts and Letters awarded her a gold medal for lifetime achievement. Death. Beaux died at the age of 87 on September 17, 1942, in Gloucester, Massachusetts. She was interred at West Laurel Hill Cemetery in Bala Cynwyd, Pennsylvania. In her will she left a Duncan Phyfe rosewood secretaire made for her father to her cherished nephew Cecil Kent Drinker, a Harvard University physician whom she had painted as a young boy and who later founded the Harvard School of Public Health. Legacy. Beaux was included in the 2018 exhibit "Women in Paris 1850-1900" at the Clark Art Institute. Though Beaux was an individualist, comparisons to Sargent would prove inevitable, and often favorable. Her strong technique, her perceptive reading of her subjects, and her ability to flatter without falsifying, were traits similar to his. "The critics are very enthusiastic. (Bernard) Berenson, Mrs. 
Coates tells me, stood in front of the portraits – Miss Beaux's three – and wagged his head. 'Ah, yes, I see!' Some Sargents. The ordinary ones are signed John Sargent, the best are signed Cecilia Beaux, which is, of course, nonsense in more ways than one, but it is part of the generous chorus of praise." Though overshadowed by Mary Cassatt and relatively unknown to museum-goers today, Beaux's craftsmanship and extraordinary output were highly regarded in her time. While presenting the Carnegie Institute's Gold Medal to Beaux in 1899, William Merritt Chase stated "Miss Beaux is not only the greatest living woman painter, but the best that has ever lived. Miss Beaux has done away entirely with sex [gender] in art." During her long productive life as an artist, she maintained her personal aesthetic and high standards against all distractions and countervailing forces. She constantly struggled for perfection. "A perfect technique in anything," she stated in an interview, "means that there has been no break in continuity between the conception and the act of performance." She summed up her driving work ethic, "I can say this: When I attempt anything, I have a passionate determination to overcome every obstacle…And I do my own work with a refusal to accept defeat that might almost be called painful."
6882
1300832296
https://en.wikipedia.org/wiki?curid=6882
Chrysler
FCA US, LLC, doing business as Stellantis North America and known historically as Chrysler ( ), is one of the "Big Three" automobile manufacturers in the United States, headquartered in Auburn Hills, Michigan. It is the American subsidiary of the multinational automotive company Stellantis. Stellantis North America sells vehicles worldwide under the Chrysler, Dodge, Jeep, and Ram Trucks nameplates. It also includes Mopar, its automotive parts and accessories division, and SRT, its performance automobile division. The division also distributes Alfa Romeo, Fiat, and Maserati vehicles in North America. The original Chrysler Corporation was founded in 1925 by Walter Chrysler from the remains of the Maxwell Motor Company. In 1998, it merged with Daimler-Benz, which renamed itself DaimlerChrysler but in 2007 sold off its Chrysler stake. The company operated as Chrysler LLC through 2009, then as Chrysler Group LLC. In 2014, it was acquired by Fiat S.p.A.; it subsequently operated as a subsidiary of the new Fiat Chrysler Automobiles (FCA), then as a subsidiary of Stellantis, the company formed from the 2021 merger of FCA and PSA Group (Peugeot Société Anonyme). After founding the company, Walter Chrysler used the General Motors brand diversification and hierarchy strategy that he had become familiar with when he worked in the Buick division at General Motors. He then acquired Fargo Trucks and the Dodge Brothers Company, and created the Plymouth and DeSoto brands in 1928. Facing postwar declines in market share, productivity, and profitability, as GM and Ford were growing, Chrysler borrowed $250 million in 1954 from Prudential Insurance to pay for expansion and updated car designs. Chrysler expanded into Europe by taking control of French, British, and Spanish auto companies in the 1960s; Chrysler Europe was sold in 1978 to PSA Peugeot Citroën for a nominal $1. The company struggled to adapt to changing markets, increased U.S. import competition, and safety and environmental regulation in the 1970s. It began an engineering partnership with Mitsubishi Motors, and began selling Mitsubishi vehicles branded as Dodge and Plymouth in North America. On the verge of bankruptcy in the late 1970s, it was saved by $1.5 billion in loan guarantees from the U.S. government. New CEO Lee Iacocca was credited with returning the company to profitability in the 1980s. In 1985, Diamond-Star Motors was created, further expanding the Chrysler-Mitsubishi relationship. In 1987, Chrysler acquired American Motors Corporation (AMC), which brought the profitable Jeep, as well as the newly formed Eagle, brands under the Chrysler umbrella. In 1998, Chrysler merged with German automaker Daimler-Benz to form DaimlerChrysler AG; the merger proved contentious with investors. As a result, Chrysler was sold to Cerberus Capital Management and renamed Chrysler LLC in 2007. Like the other Big Three automobile manufacturers, Chrysler was impacted by the automotive industry crisis of 2008–2010. The company remained in business through a combination of negotiations with creditors, filing for Chapter 11 bankruptcy reorganization on April 30, 2009, and participating in a bailout from the U.S. government through the Troubled Asset Relief Program. On June 10, 2009, Chrysler emerged from the bankruptcy proceedings with the United Auto Workers pension fund, Fiat S.p.A., and the U.S. and Canadian governments as principal owners. The bankruptcy resulted in Chrysler defaulting on over $4 billion in debts. 
In May 2011, Chrysler finished repaying its obligations to the U.S. government five years early, although the cost to the American taxpayer was $1.3 billion. Over the next few years, Fiat S.p.A. gradually acquired the other parties' shares. In January 2014, Fiat acquired the rest of Chrysler from the United Auto Workers retiree health trust, making Chrysler Group a subsidiary of Fiat S.p.A. In May 2014, Fiat Chrysler Automobiles was established by merging Fiat S.p.A. into the company. Chrysler Group LLC remained a subsidiary until December 15, 2014, when it was renamed FCA US LLC to reflect the Fiat-Chrysler merger. As a result of the merger between FCA and PSA, on 17 January 2021 it became a subsidiary of the Stellantis Group. History. 1925–1998: Chrysler Corporation. The Chrysler company was founded by Walter Chrysler on June 6, 1925, when the Maxwell Motor Company (est. 1904) was re-organized into the Chrysler Corporation. The company was headquartered in the Detroit enclave of Highland Park, where it remained until completing the move to its present Auburn Hills location in 1996. Chrysler had arrived at the ailing Maxwell-Chalmers company in the early 1920s, hired to overhaul the company's troubled operations (after a similar rescue job at the Willys-Overland car company). In late 1923, production of the Chalmers automobile was ended. In January 1924, Walter Chrysler launched the well-received Chrysler automobile. The Chrysler Six was designed to provide customers with an advanced, well-engineered car at an affordable price. Elements of this car are traceable to a prototype which had been under development at Willys during Chrysler's tenure. The original 1924 Chrysler included a carburetor air filter, high-compression engine, full-pressure lubrication, and an oil filter, features absent from most autos at the time. Among the innovations in its early years were the first practical mass-produced four-wheel hydraulic brakes, a system nearly completely engineered by Chrysler with patents assigned to Lockheed, and rubber engine mounts, called "Floating Power", to reduce vibration. Chrysler also developed a wheel with a ridged rim, designed to keep a deflated tire from flying off the wheel. This wheel was eventually adopted by the auto industry worldwide. The Maxwell brand was dropped after the 1925 model year, with the new, lower-priced four-cylinder Chryslers introduced for the 1926 model year being badge-engineered Maxwells. The advanced engineering and testing that went into Chrysler Corporation cars helped to push the company to the second-place position in U.S. sales by 1936, which it held until 1949. In 1928, the Chrysler Corporation began dividing its vehicle offerings by price class and function. The Plymouth brand was introduced at the low-priced end of the market (created essentially by once again reworking and rebadging the Chrysler Series 50 four-cylinder model). At the same time, the DeSoto brand was introduced in the medium-price field. Also in 1928, Chrysler bought the Dodge Brothers automobile and truck company and continued the successful Dodge line of automobiles and the Fargo range of trucks. By the mid-1930s, the DeSoto and Dodge divisions would trade places in the corporate hierarchy. The Imperial name had been used since 1926 but was never a separate make, just the top-of-the-line Chrysler. However, in 1955, the company decided to offer it as its own make/brand and division to better compete with its rivals, Lincoln and Cadillac. 
This addition changed the company's traditional four-make lineup to five (in order of price from bottom to top): Plymouth, Dodge, DeSoto, Chrysler, and the now-separate Imperial. On April 28, 1955, Chrysler and Philco announced the development and production of the world's first all-transistor car radio. The all-transistor car radio, Mopar model 914HR, developed and produced by Chrysler and Philco, was a $150 option on the 1956 Imperial automobile models. Philco began manufacturing this radio in the fall of 1955 at its Sandusky, Ohio, plant. On September 28, 1957, Chrysler announced the first production electronic fuel injection (EFI), as an option on some of its new 1958 car models (Chrysler 300D, Dodge D500, DeSoto Adventurer, Plymouth Fury). The first attempt to use this system was by American Motors on the 1957 Rambler Rebel. Bendix Corporation's Electrojector used a transistor "computer brain" modulator box, but teething problems on pre-production cars meant very few cars were made. The EFI system in the Rambler ran fine in warm weather, but suffered hard starting in cooler temperatures, and AMC decided not to use it on the 1957 Rambler Rebel production cars that were sold to the public. Chrysler also used the Bendix "Electrojector" fuel injection system on its 1958 production car models, and only around 35 vehicles were built with this option. Owners of EFI Chryslers were so dissatisfied that all but one were retrofitted with carburetors (while that one has been completely restored, with the original EFI electronic problems resolved). The Valiant was also introduced for the 1960 model year as a distinct brand. In the U.S. market, Valiant was made a model in the Plymouth line for 1961, and the DeSoto make was discontinued in 1961. With those exceptions per applicable year and market, Chrysler's range from lowest to highest price from the 1940s through the 1970s was Valiant, Plymouth, Dodge, DeSoto, Chrysler, and Imperial. In 1954, Chrysler became the exclusive engine supplier to Facel Vega, a French coachbuilder offering its own line of hand-built luxury performance cars, providing its Hemi V8 coupled with the PowerFlite and TorqueFlite automatic transmissions. The Facel Vega Excellence was a four-door hardtop with rear-hinged coach doors that listed for US$12,800. In 1960, Facel Vega introduced the smaller Facellia sports car to capitalize on its sales success with Chrysler-supplied engines; at the time Chrysler did not produce a four-cylinder engine, and the company had to find alternatives before production began. In 1960, Chrysler became the first of the "Big Three" automakers to switch to unibody construction for its passenger cars, with the exception of the Imperial, which continued to be produced on the body-on-frame basis then prevalent in the US until 1967, by which time unibody construction had proven reliable enough that Chrysler was willing to apply it to its flagship as well. From 1963 through 1969, Chrysler increased its existing stakes to take complete control of the French Simca, British Rootes, and Spanish Barreiros companies, merging them into Chrysler Europe in 1967. In the 1970s, an engineering partnership was established with Mitsubishi Motors, and Chrysler began selling Mitsubishi vehicles branded as Dodge and Plymouth in North America. Chrysler struggled to adapt to the changing environment of the 1970s. 
When consumer tastes shifted to smaller cars in the early 1970s, particularly after the 1973 oil crisis, Chrysler could not meet the demand, although its compact models on the "A" body platform, the Dodge Dart and Plymouth Valiant, had proven economy and reliability and sold very well. Additional burdens came from increased US import competition, and tougher government regulation of car safety, fuel economy, and emissions. As the smallest of the Big 3 US automakers, Chrysler lacked the financial resources to meet all of these challenges. 1975 was the last year for the Imperial (apart from an ill-fated revival attempt in 1981–1983), as its low sales no longer justified a separate brand that offered little over a high-end Chrysler New Yorker. In 1976, with the demise of the reliable Dart/Valiant, quality control declined. Their replacements, the Dodge Aspen and Plymouth Volare, were comfortable and had good roadability, but owners soon experienced major reliability problems which crept into other models as well. Engines failed and/or did not run well, and premature rust plagued bodies. In 1978, Lee Iacocca was brought in to turn the company around, and in 1979 Iacocca sought US government help. Congress later passed the "Loan Guarantee Act" providing $1.5 billion in loan guarantees. The "Loan Guarantee Act" required that Chrysler also obtain $2 billion in concessions or aid from sources outside the federal government, which included interest rate reductions for $650 million of the savings, asset sales of $300 million, local and state tax concessions of $250 million, and wage reductions of about $590 million along with a $50 million stock offering. $180 million was to come from concessions from dealers and suppliers. Also in 1978, Chrysler offloaded its ailing European operation to PSA Peugeot Citroën for a nominal $1; the sale took with it the group's substantial losses and debts, which had been dragging the rest of the business down. After a period of plant closures and salary cuts agreed to by both management and the auto unions, the compact Plymouth Reliant and Dodge Aries were introduced in 1981 on the all-new Chrysler K platform, developed from the Plymouth Horizon and Dodge Omni hatchbacks introduced in 1978. Chrysler returned to profitability in the early 1980s and repaid the loans, with interest, in 1983. The Omni/Horizon were first offered with four-cylinder engines provided by the Chrysler Europe brand Simca, then by Volkswagen, until the all-new, Chrysler-engineered K engine arrived. Chrysler had not manufactured a four-cylinder engine since 1933, when the Chrysler flathead four-cylinder was canceled. In November 1983, the Dodge Caravan/Plymouth Voyager was introduced, built on a modified K platform, establishing the minivan as a major category and initiating Chrysler's return to stability. In 1985, Diamond-Star Motors was created, further expanding the Chrysler-Mitsubishi relationship. In 1985, Chrysler entered an agreement with American Motors Corporation to produce Chrysler's rear-wheel-drive M-platform cars, as well as the front-wheel-drive Dodge Omni, in AMC's Kenosha, Wisconsin, plant. In 1987, Chrysler acquired the 47% ownership of AMC that was held by Renault. The remaining outstanding shares of AMC were bought on the NYSE by August 5, 1987, valuing the deal at somewhere between US$1.7 billion and US$2 billion, depending on how costs were counted. 
Chrysler CEO Lee Iacocca wanted the Jeep brand, particularly the Jeep Grand Cherokee (ZJ) that was under development, the new world-class manufacturing plant in Bramalea, Ontario, and AMC's engineering and management talent, which became critical for Chrysler's future success. Chrysler established the Jeep/Eagle division as a "specialty" arm to market products distinctly different from the K-car-based products, with the Eagle cars targeting import buyers. Former AMC dealers sold Jeep vehicles and various new Eagle models, as well as Chrysler products, strengthening the automaker's retail distribution system. Eurostar, a joint venture between Chrysler and Steyr-Daimler-Puch, began producing the Chrysler Voyager in Austria for European markets in 1992. 1998–2007: DaimlerChrysler. In 1998, Chrysler and its subsidiaries entered into a partnership dubbed a "merger of equals" with German-based Daimler-Benz AG, creating the combined entity DaimlerChrysler AG. To the surprise of many stockholders, Daimler acquired Chrysler in a stock swap before Chrysler CEO Bob Eaton retired. Under DaimlerChrysler, the company was named DaimlerChrysler Motors Company LLC, with its U.S. operations generally called "DCX". The Eagle brand was retired soon after Chrysler's merger with Daimler-Benz in 1998. Jeep became a stand-alone division, and efforts were made to merge the Chrysler and Jeep brands as one sales unit. In 2001, the Plymouth brand was also discontinued. Eurostar also built the Chrysler PT Cruiser in 2001 and 2002. The Austrian venture was sold to Magna International in 2002 and became Magna Steyr. The Voyager continued in production until 2007, while the Chrysler 300C, Jeep Grand Cherokee, and Jeep Commander were also built at the plant from 2005 until 2010. On May 14, 2007, DaimlerChrysler announced the sale of 80.1% of Chrysler Group to American private equity firm Cerberus Capital Management, L.P., thereafter known as Chrysler LLC, although Daimler (renamed as Daimler AG) continued to hold a 19.9% stake. 2007–2014: Effects of Great Recession. The economic collapse during the 2008 financial crisis pushed the company to the brink. On April 30, 2009, the automaker filed for Chapter 11 bankruptcy protection to be able to operate as a going concern, while renegotiating its debt structure and other obligations, which resulted in the corporation defaulting on over $4 billion in secured debts. The U.S. government described the company's action as a "prepackaged surgical bankruptcy". On June 10, 2009, substantially all of Chrysler's assets were sold to "New Chrysler", organized as Chrysler Group LLC. The federal government provided support for the deal with US$8 billion in financing at an interest rate of nearly 21%. Under CEO Sergio Marchionne, "World Class Manufacturing", or WCM, a system of thorough manufacturing quality, was introduced, and several products were relaunched with an emphasis on quality and luxury. The Ram, Jeep, Dodge, SRT, and Chrysler divisions were separated to focus on their own identity and brand, and 11 major model refreshes occurred in 21 months. The PT Cruiser, Nitro, Liberty and Caliber models (created during DCX) were discontinued. On May 24, 2011, Chrysler repaid its $7.6 billion loans to the United States and Canadian governments. The US Treasury, through the Troubled Asset Relief Program (TARP), invested $12.5 billion in Chrysler and recovered $11.2 billion when the company shares were sold in May 2011, resulting in a $1.3 billion loss. On July 21, 2011, Fiat bought the Chrysler shares held by the US Treasury. 
The purchase made Chrysler foreign-owned again, with the Chrysler brand now positioned as the group's luxury division. The Chrysler 300 was badged Lancia Thema in some European markets (with additional engine options), giving Lancia a much-needed replacement for its flagship. 2014–2021: Fiat Chrysler Automobiles. On January 21, 2014, Fiat bought the remaining shares of Chrysler owned by the VEBA retiree health trust in a deal worth $3.65 billion. Several days later, the intended reorganization of Fiat and Chrysler under a new holding company, Fiat Chrysler Automobiles, together with a new FCA logo, was announced. The most challenging launch for the new company came immediately, in January 2014, with the completely redesigned Chrysler 200, the first vehicle developed by the fully integrated company on its global compact platform. On December 16, 2014, Chrysler Group LLC announced a name change to FCA US LLC. On January 12, 2017, FCA shares traded on the New York Stock Exchange lost value after the EPA accused FCA US of using emissions-cheating software to evade diesel-emissions tests. The company countered the accusations, and chairman and CEO Sergio Marchionne sternly rejected them. The following day, shares rose as investors played down the effect of the accusations. Analysts gave estimates of potential fines from several hundred million dollars to $4 billion, although the likelihood of a hefty fine was considered low. Senior United States Senator Bill Nelson urged the FTC to look into possible deceptive marketing of the company's diesel-powered SUVs. Shares dropped 2.2% after the announcement. In 2022, FCA US pleaded guilty to a criminal charge of conspiring to defraud the US, commit wire fraud, and violate the Clean Air Act. On July 21, 2018, Sergio Marchionne stepped down as chairman and CEO for health reasons, and was replaced by John Elkann and Michael Manley, respectively. As a result of ending domestic production of more fuel-efficient passenger automobiles such as the Dodge Dart and Chrysler 200 sedans, FCA US elected to pay $77 million in fines for violating the anti-backsliding provision of fuel economy standards set under the Energy Independence and Security Act of 2007 for its model year 2016 fleet. It was again fined for the 2017 model year for not meeting the minimum domestic passenger car standard. FCA described the $79 million civil penalty as "not expected to have a material impact on its business." As part of a January 2019 settlement, Fiat Chrysler was to recall and repair approximately 100,000 automobiles equipped with a 3.0-liter V6 EcoDiesel engine having a prohibited defeat device, pay $311 million in total civil penalties to US regulators and the California Air Resources Board (CARB), pay $72.5 million in state civil penalties, implement corporate governance reforms, and pay $33.5 million to mitigate excess pollution. The company was also to pay affected consumers up to $280 million and offer extended warranties on such vehicles worth $105 million. The total value of the settlement was about $800 million, though FCA did not admit liability, and it did not resolve an ongoing criminal investigation. In February 2024, Chrysler unveiled a concept for its first electric vehicle, the Chrysler Halcyon, a battery-electric sedan. Corporate governance. Management positions of Stellantis North America include: Sales and marketing. United States sales. Chrysler is the smallest of the "Big Three" U.S. automakers (Stellantis North America, Ford Motor Company, and General Motors). In 2020, FCA US sold just over 1.8 million vehicles. Global sales. 
Chrysler was the world's 11th largest vehicle manufacturer as ranked by OICA in 2012. Total Chrysler vehicle production was about 2.37 million that year. The company has since become a wholly owned subsidiary and no longer reports global sales. Marketing. Lifetime powertrain warranty. In 2007, Chrysler began to offer a lifetime powertrain warranty for the first registered owner or retail lessee. The deal covered the owner or lessee in the U.S., Puerto Rico, and the Virgin Islands, for 2009 model year vehicles, and for 2006, 2007, and 2008 model year vehicles purchased on or after July 26, 2007. Covered vehicles excluded SRT models, diesel vehicles, Sprinter models, Ram Chassis Cab, Hybrid System components (including transmission), and certain fleet vehicles. The warranty was non-transferable. After Chrysler's restructuring, the warranty program was replaced by a five-year/100,000-mile transferable warranty for 2010 or later vehicles. "Let's Refuel America". In 2008, as a response to customer feedback citing the prospect of rising gas prices as a top concern, Chrysler launched the "Let's Refuel America" incentive campaign, which guaranteed new-car buyers a gasoline price of $2.99 a gallon for three years. With the U.S. purchase of eligible Chrysler, Jeep, and Dodge vehicles, customers could enroll in the program and receive a gas card that immediately lowered their gas price to $2.99 a gallon and kept it there for the three years. Lancia co-branding. Chrysler planned for Lancia to codevelop products, with some vehicles being shared. Olivier Francois, Lancia's CEO, was appointed to head the Chrysler division in October 2009. Francois planned to reestablish Chrysler as an upscale brand. Ram trucks. In October 2009, Dodge's car and truck lines were separated, with the name "Dodge" being used for cars, minivans, and crossovers and "Ram" for light- and medium-duty trucks and other commercial-use vehicles. "Imported From Detroit". In 2011, Chrysler unveiled its "Imported From Detroit" campaign with ads featuring Detroit rapper Eminem, one of which aired during the Super Bowl. The campaign highlighted the rejuvenation of the entire product lineup, which included the new, redesigned, and repackaged 2011 model year 200 sedans and 200 convertibles, the Chrysler 300 sedan, and the Chrysler Town & Country minivan. As part of the campaign, Chrysler sold a line of clothing items featuring the Monument to Joe Louis, with proceeds funneled to Detroit-area charities, including the Boys and Girls Clubs of Southeast Michigan, Habitat for Humanity Detroit and the Marshall Mathers Foundation. In March 2011, Chrysler Group LLC filed a lawsuit against Moda Group LLC (owner of the Pure Detroit clothing retailer) for copying and selling merchandise with the "Imported from Detroit" slogan. Chrysler claimed it had notified the defendant of its pending trademark application on February 14, but the defendant argued Chrysler had not secured a trademark for the "Imported From Detroit" phrase. On June 18, 2011, U.S. District Judge Arthur Tarnow ruled that Chrysler's request did not show that it would suffer irreparable harm or that it had a strong likelihood of winning its case. Therefore, Pure Detroit's owner, Detroit retailer Moda Group LLC, could continue selling its "Imported from Detroit" products. Tarnow also noted that Chrysler did not have a trademark on "Imported from Detroit" and rejected the automaker's argument that trademark law was not applicable to the case. 
In March 2012, Chrysler Group LLC and Pure Detroit agreed to a March 27 mediation to try to settle the lawsuit over the clothing company's use of the "Imported from Detroit" slogan. Pure Detroit stated that Chrysler had made false claims about the origins of three vehicles (the Chrysler 200, Chrysler 300 and Chrysler Town & Country), none of which were built in Detroit, and that Chrysler's Imported From Detroit merchandise was not itself made in Detroit. Later in 2012, Chrysler and Pure Detroit came to an undisclosed settlement. Chrysler's Jefferson North Assembly, which makes the Jeep Grand Cherokee and Dodge Durango, is the only car manufacturing plant of any company remaining entirely in Detroit (General Motors operates a plant that is partly in Detroit and partly in Hamtramck). In 2011, Eminem settled a lawsuit against Audi alleging that the automaker had copied the Chrysler 300 Super Bowl commercial in its Audi A6 Avant ad. "Halftime in America". Again in 2012, Chrysler advertised during the Super Bowl. Its two-minute February 5, 2012 Super Bowl XLVI advertisement was titled "Halftime in America". The ad drew criticism from several leading U.S. conservatives, who suggested that its messaging implied that President Barack Obama deserved a second term and, as such, was political payback for Obama's support for the federal bailout of the company. Asked about the criticism in a "60 Minutes" interview with Steve Kroft, Sergio Marchionne responded "just to rectify the record I paid back the loans at 19.7% Interest. I don't think I committed to do to a commercial on top of that" and characterized the Republican reaction as "unnecessary and out of place". America's Import. In 2014, Chrysler began using a new slogan, "America's Import", in ads introducing its all-new 2015 Chrysler 200. The ads targeted foreign automakers from Germany to Japan (invoking German performance and Japanese quality), and selected ads ended with the line "We Built This", emphasizing that the car was built in America rather than overseas. Product line. Chrysler Uconnect. First introduced as MyGig, Chrysler Uconnect is a system that brings interactive features to the in-car radio and telematics-like control of car settings. As of mid-2015, it was installed in hundreds of thousands of Fiat Chrysler vehicles. It connects to the Internet via the mobile network of AT&T, providing the car with its own IP address. Internet connectivity using any Chrysler, Dodge, Jeep or Ram vehicle, via a Wi-Fi "hot-spot", is also available via Uconnect Web. According to Chrysler LLC, the hotspot extends a short distance from the vehicle in all directions, and combines both Wi-Fi and Sprint's 3G cellular connectivity. Uconnect is available on several current models and was available on several discontinued Chrysler models, including the Dodge Dart, Chrysler 300, Aspen, Sebring, Town and Country, Dodge Avenger, Caliber, Grand Caravan, Challenger, Charger, Journey, Nitro, and Ram. In July 2015, IT security researchers announced a severe security flaw assumed to affect every Chrysler vehicle with Uconnect produced from late 2013 to early 2015. The flaw allowed hackers to gain access to the car over the Internet, and in the case of a Jeep Cherokee it was demonstrated to enable an attacker to take control not just of the radio, A/C, and windshield wipers, but also of the car's steering, brakes and transmission. Chrysler published a patch that car owners can download and install via a USB stick, or have a car dealer install for them. 
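The USB update path described above lends itself to a basic integrity check before installation. The following sketch is illustrative only and is not FCA's actual tooling; the file name and the published digest are hypothetical placeholders standing in for whatever values a vendor would list alongside a download.

```python
# Illustrative sketch, not FCA's actual update tooling: verify that a downloaded
# firmware image matches a digest published alongside it before copying it to a USB stick.
# The file name and digest used in the example are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def ok_to_copy(image_path: str, published_digest: str) -> bool:
    """True only if the local file's digest matches the vendor-published digest."""
    return sha256_of(image_path) == published_digest.strip().lower()

# Example usage with placeholder values:
# if ok_to_copy("uconnect_update.bin", "<digest from the vendor's download page>"):
#     print("Checksum matches; copy the file to the USB stick.")
```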
Brands. Current and former brands of Stellantis North America: Brand predecessors. United States Motor Company (1908–1913): reorganized and folded into Maxwell. Rootes Group (1913–1971), UK: minority interest purchased by Chrysler in 1964, with Chrysler progressively taking a controlling interest in 1967; renamed Chrysler Europe in 1971. American Motors Corporation (1954–1988), US: purchased by Chrysler and renamed the Jeep-Eagle Division. Graham-Paige (1927–1947), mid-priced cars: purchased by Henry Kaiser and reorganized into Kaiser-Frazer Motors. Willys-Overland Motors (1912–1963), US: acquired by Kaiser Motors, later Kaiser Jeep, then by AMC in 1970. Environmental initiatives. In 1979, Chrysler, in cooperation with the United States Department of Energy, produced an experimental battery electric vehicle, the Chrysler ETV-1. In 1992, Chrysler developed the Dodge EPIC concept minivan. In 1993, Chrysler sold a limited-production electric minivan called the TEVan; only 56 were produced, mostly for electric utilities. A second generation, the EPIC (unrelated to the concept), was released in 1997 and discontinued in 1999. Chrysler once owned the Global Electric Motorcars company, building low-speed neighborhood electric vehicles, but sold GEM to Polaris Industries in 2011. In September 2007, Chrysler established ENVI, an in-house organization focused on electric-drive vehicles and related technologies, which was disbanded by late 2009. In August 2009, Chrysler took US$70 million in grants from the U.S. Department of Energy to develop a test fleet of 220 hybrid pickup trucks and minivans. The first hybrid models, the Chrysler Aspen hybrid and the Dodge Durango hybrid, were discontinued a few months after production began in 2008; they used GM-designed hybrid technology shared with GM, Daimler and BMW. Chrysler was on the Advisory Council of the PHEV Research Center, and undertook a government-sponsored demonstration project with Ram and minivan vehicles. In 2012, FCA CEO Sergio Marchionne stated that Chrysler and Fiat planned to focus primarily on alternative fuels, such as compressed natural gas and diesel, instead of hybrid and electric drivetrains for their consumer products. Fiat Chrysler bought a total of 8.2 million megagrams of U.S. greenhouse gas emission credits from competitors including Toyota, Honda, Tesla and Nissan for the 2010, 2011, 2013, and 2014 model years. It had the worst fleet average fuel economy among major manufacturers selling in the US from model years 2012–2022. Chrysler Defense. Chrysler's dedicated tank-building division was founded as the Chrysler Tank division in 1940, originally with the intention of providing another production line for the M2 Medium Tank, so that the U.S. Army could more rapidly build up its inventory of the type. Its first plant was the Detroit Arsenal Tank Plant. When the M2A1 was unexpectedly declared obsolete in August of the same year, plans were altered (though not without considerable difficulty) to produce the M3 Grant instead, primarily for the British as part of the United States' under-the-counter support for the United Kingdom against Nazi Germany (the U.S. not yet being formally in the war), with the balance of the revised order going to the U.S. Army as the "Lee". After December 1941 and the United States' entry into the war against the Axis powers, the Tank division rapidly expanded, with new facilities such as the Tank Arsenal Proving Ground at (then) Utica, Michigan. 
It also quickly widened the range of products it was developing and producing, including the M4 Sherman tank and the Chrysler A57 multibank tank engine. Special programs. During World War II, essentially all of Chrysler's facilities were devoted to building military vehicles (the Jeep brand came later, after Chrysler acquired American Motors Corporation). Chrysler also designed V12 and V16 hemi engines for airplanes, but these did not make it into production, as jets were developed and seen as the future of air travel. During the 1950s Cold War period, Chrysler made air raid sirens powered by its Hemi V-8 engines. Radar antennas. One of Chrysler's most significant contributions to the war effort was in radar technology. When the Radiation Laboratory at MIT was established in 1941 to develop microwave radars, one of the first projects resulted in the SCR-584, the most widely recognized radar system of the war era. This system included a parabolic antenna six feet in diameter that was mechanically aimed in a helical pattern (round and round as well as up and down). For the final production design of this antenna and its highly complex drive mechanism, the Army's Signal Corps Laboratories turned to Chrysler's Central Engineering Office. There, the parabola was changed from aluminum to steel, allowing it to be formed on standard automotive presses. To keep weight down, 6,000 equally spaced holes were drilled in the face (this had no effect on the radiation pattern). The drive mechanism was completely redesigned, using technology derived from Chrysler's research in automotive gears and differentials. The changes resulted in improved performance, reduced weight, and easier maintenance. A large portion of the Dodge plant was used in building 1,500 of the SCR-584 antennas as well as the vans used in the systems. Missiles. In April 1950, the U.S. Army established the Ordnance Guided Missile Center (OGMC) at Redstone Arsenal, adjacent to Huntsville, Alabama. To form OGMC, over 1,000 civilian and military personnel were transferred from Fort Bliss, Texas. Included was a group of German scientists and engineers led by Wernher von Braun; this group had been brought to America under Project Paperclip. OGMC designed the Army's first short-range ballistic missile, the PGM-11 Redstone, based on the WWII German V-2 missile. Chrysler established the Missile Division to serve as the Redstone prime contractor, setting up an engineering operation in Huntsville and, for production, obtaining the use of a large U.S. Navy plant in Sterling Heights, Michigan. The Redstone was in active service from 1958 until 1964; it was also the first missile to test-launch a live nuclear weapon, detonated in a 1958 test in the South Pacific. Working together, the Missile Division and von Braun's team greatly increased the capability of the Redstone, resulting in the PGM-19 Jupiter, a medium-range ballistic missile. In May 1959, a Jupiter missile launched two small monkeys into space in a nose cone; this was America's first successful flight and recovery of live space payloads. Responsibility for deploying Jupiter missiles was transferred from the Army to the Air Force; armed with nuclear warheads, they were first deployed in Italy and Turkey during the early 1960s. Space boosters. In July 1959, NASA chose the Redstone missile as the basis for the Mercury-Redstone Launch Vehicle to be used for suborbital test flights of the Project Mercury spacecraft. 
Three uncrewed MRLV launch attempts were made between November 1960 and March 1961, two of which were successful. The MRLV successfully launched the chimpanzee Ham, and astronauts Alan Shepard and Gus Grissom on three suborbital flights in January, May, and July 1961, respectively. America's more ambitious crewed space travel plans included the design of the Saturn series of heavy-lift launch vehicles by a team headed by Wernher von Braun. Chrysler's Huntsville operation, then designated the Space Division, became Marshall Space Flight Center's prime contractor for the first stage of the Saturn I and Saturn IB versions. The design was based on a cluster of Redstone and Jupiter fuel tanks and Chrysler built it for the Apollo program in the Michoud Assembly Facility in East New Orleans, one of the largest manufacturing plants in the world. Between October 1961 and July 1975, NASA used ten Saturn Is and nine Saturn IBs for suborbital and orbital flights, all of which were successful; Chrysler missiles and boosters never suffered a launch failure. The division was also a subcontractor which modified one of the mobile launcher platforms for use with the Saturn IB rockets using Saturn V infrastructure.
6883
46628330
https://en.wikipedia.org/wiki?curid=6883
City of London
The City of London, also known as "the City", is a ceremonial county and local government district with city status in England. It is the historic centre of London, though it forms only a small part of the capital and is administratively separate from the surrounding Greater London metropolis. The City of London had a population of 8,583 at the 2021 census; however, over 500,000 people were employed in the area as of 2019. It has an area of roughly one square mile, the source of the nickname "the Square Mile". The City is a unique local authority area governed by the City of London Corporation, which is led by the Lord Mayor of the City of London. Together with Canary Wharf and the West End, the City of London forms the primary central business district of London, which is one of the leading financial centres of the world. The Bank of England and the London Stock Exchange are both based in the City. The insurance industry also has a major presence in the area, and the presence of the Inns of Court on the City's western boundary has made it a centre for the legal profession. The present City of London constituted the majority of London from its settlement by the Romans in the 1st century AD to the Middle Ages. It contains several historic sites, including St Paul's Cathedral, the Royal Exchange, Mansion House, the Old Bailey, and Smithfield Market. History. Origins. The Roman legions established a settlement known as "Londinium" on the current site of the City of London around AD 43. Its bridge over the River Thames turned the city into a road nexus and major port, serving as a major commercial centre in Roman Britain until its abandonment during the 5th century. Archaeologist Leslie Wallace notes that, because extensive archaeological excavation has not revealed any signs of a significant pre-Roman presence, "arguments for a purely Roman foundation of London are now common and uncontroversial." At its height, the Roman city had a population of approximately 45,000–60,000 inhabitants. Londinium was an ethnically diverse city, with inhabitants from across the Roman Empire, including natives of Britannia, continental Europe, the Middle East, and North Africa. The Romans built the London Wall some time between AD 190 and 225. The boundaries of the Roman city were similar to those of the City of London today, though the City extends further west than Londinium's Ludgate, and the Thames was undredged and thus wider than it is today, with Londinium's shoreline slightly north of the city's present shoreline. The Romans built a bridge across the river, as early as AD 50, near today's London Bridge. Decline. By the time the London Wall was constructed, the city's fortunes were in decline, and it faced problems of plague and fire. The Roman Empire entered a long period of instability and decline, including the Carausian Revolt in Britain. In the 3rd and 4th centuries, the city was under attack from Picts, Scots, and Saxon raiders. The decline continued, both for Londinium and the Empire, and in AD 410 the Romans withdrew entirely from Britain. Many of the Roman public buildings in Londinium had by this time fallen into decay and disuse, and gradually after the formal withdrawal the city became almost (if not, at times, entirely) uninhabited. The centre of trade and population moved away from the walled Londinium to Lundenwic ("London market"), a settlement to the west, roughly in the modern-day Strand/Aldwych/Covent Garden area. Anglo-Saxon restoration. 
During the Anglo-Saxon Heptarchy, the London area came in turn under the Kingdoms of Essex, Mercia, and later Wessex, though from the mid-8th century it was frequently under threat from raids by different groups including the Vikings. Bede records that in AD 604 St Augustine consecrated Mellitus as the first bishop to the Anglo-Saxon kingdom of the East Saxons and their king, Sæberht. Sæberht's uncle and overlord, Æthelberht, king of Kent, built a church dedicated to St Paul in London, as the seat of the new bishop. It is assumed, although unproven, that this first Anglo-Saxon cathedral stood on the same site as the later medieval and the present cathedrals. Alfred the Great, King of Wessex, occupied and began the resettlement of the old Roman walled area in 886, and appointed his son-in-law, Earl Æthelred of Mercia, over it as part of the reconquest of the Viking-occupied parts of England. The refortified Anglo-Saxon settlement was known as Lundenburh ("London Fort", a borough). The historian Asser said that "Alfred, king of the Anglo-Saxons, restored the city of London splendidly ... and made it habitable once more." Alfred's "restoration" entailed reoccupying and refurbishing the nearly deserted Roman walled city, building quays along the Thames, and laying a new city street plan. Alfred's taking of London and the rebuilding of the old Roman city were a turning point in history, not only as the permanent establishment of the City of London, but also as part of a unifying moment in early England, with Wessex becoming the dominant English kingdom and the repelling (to some degree) of the Viking occupation and raids. While London, and indeed England, were afterwards subjected to further periods of Viking and Danish raids and occupation, the establishment of the City of London and the Kingdom of England prevailed. In the 10th century, Athelstan permitted eight mints to be established, compared with six in his capital, Winchester, indicating the wealth of the city. London Bridge, which had fallen into ruin following the Roman evacuation and abandonment of Londinium, was rebuilt by the Saxons, but was periodically destroyed by Viking raids and storms. As the focus of trade and population moved back to within the old Roman walls, the older Saxon settlement of Lundenwic was largely abandoned and gained the name of "Ealdwic" (the "old settlement"). The name survives today as Aldwych (the "old market-place"), the name of a street and an area of the City of Westminster between Westminster and the City of London. Medieval era. Following the Battle of Hastings, William the Conqueror marched on London, reaching as far as Southwark, but failed to get across London Bridge or defeat the Londoners. He eventually crossed the River Thames at Wallingford, pillaging the land as he went. Rather than continuing the war, Edgar the Ætheling, Edwin of Mercia and Morcar of Northumbria surrendered at Berkhamsted. William granted the citizens of London a charter in 1067; the city was one of the few examples of the English retaining some authority. The city was not covered by the Domesday Book. William built three castles around the city to keep Londoners subdued: the Tower of London, Baynard's Castle and Montfichet's Tower. Around 1132 the City was given the right to appoint its own sheriffs rather than having sheriffs appointed by the monarch. London's chosen sheriffs also served as the sheriffs for the county of Middlesex. 
This meant that the City and Middlesex were regarded as one administratively for addressing crime and keeping the peace (not that the county was a dependency of the city). London's sheriffs continued to serve Middlesex until the county was given its own sheriffs again following the Local Government Act 1888. By 1141 the whole body of the citizenry was considered to constitute a single community. This 'commune' was the origin of the City of London Corporation and the citizens gained the right to appoint, with the king's consent, a mayor in 1189—and to directly elect the mayor from 1215. From medieval times, the city has been composed of 25 ancient wards, each headed by an alderman, who chairs Wardmotes, which still take place at least annually. A Folkmoot, for the whole of the City held at the outdoor cross of St Paul's Cathedral, was formerly also held. Many of the medieval offices and traditions continue to the present day, demonstrating the unique nature of the City and its Corporation. In 1381, the Peasants' Revolt affected London. The rebels took the City and the Tower of London, but the rebellion ended after its leader, Wat Tyler, was killed during a confrontation that included Lord Mayor William Walworth. In 1450, rebel forces again occupied the City during Jack Cade's Rebellion before being ousted by London citizens following a bloody battle on London Bridge. In 1550, the area south of London Bridge in Southwark came under the control of the City with the establishment of the ward of Bridge Without. The city was burnt severely on a number of occasions, the worst being in 1123 and in the Great Fire of London in 1666. Both of these fires were referred to as "the" Great Fire. After the fire of 1666, a number of plans were drawn up to remodel the city and its street pattern into a renaissance-style city with planned urban blocks, squares and boulevards. These plans were almost entirely not taken up, and the medieval street pattern re-emerged almost intact. Early modern period. In the 1630s the Crown sought to have the Corporation of the City of London extend its jurisdiction to surrounding areas. In what is sometimes called the "great refusal", the Corporation said no to the King, which in part accounts for its unique government structure to the present. By the late 16th century, London increasingly became a major centre for banking, international trade and commerce. The Royal Exchange was founded in 1565 by Sir Thomas Gresham as a centre of commerce for London's merchants, and gained Royal patronage in 1571. Although no longer used for its original purpose, its location at the corner of Cornhill and Threadneedle Street continues to be the geographical centre of the city's core of banking and financial services, with the Bank of England moving to its present site in 1734, opposite the Royal Exchange. Immediately to the south of Cornhill, Lombard Street was the location from 1691 of Lloyd's Coffee House, which became the world-leading insurance market. London's insurance sector continues to be based in the area, particularly in Lime Street. In 1708, Christopher Wren's masterpiece, St Paul's Cathedral, was completed on his birthday. The first service had been held on 2 December 1697, more than 10 years earlier. It replaced the original St Paul's, which had been completely destroyed in the Great Fire of London, and is considered to be one of the finest cathedrals in Britain and a fine example of Baroque architecture. Growth of London. 
The 18th century was a period of rapid growth for London, reflecting an increasing national population, the early stirrings of the Industrial Revolution, and London's role at the centre of the evolving British Empire. The urban area expanded beyond the borders of the City of London, most notably during this period towards the West End and Westminster. Expansion continued and became more rapid by the beginning of the 19th century, with London growing in all directions. To the East the Port of London grew rapidly during the century, with the construction of many docks, needed as the Thames at the City could not cope with the volume of trade. The arrival of the railways and the Tube meant that London could expand over a much greater area. By the mid-19th century, with London still rapidly expanding in population and area, the City had already become only a small part of the wider metropolis. 19th and 20th centuries. An attempt was made in 1894 with the Royal Commission on the Amalgamation of the City and County of London to end the distinction between the city and the surrounding County of London, but a change of government at Westminster meant the option was not taken up. The city as a distinct polity survived despite its position within the London conurbation and numerous local government reforms. Supporting this status, the city was a special parliamentary borough that elected four members to the unreformed House of Commons, who were retained after the Reform Act 1832; reduced to two under the Redistribution of Seats Act 1885; and ceased to be a separate constituency under the Representation of the People Act 1948. Since then the city is a minority (in terms of population and area) of the Cities of London and Westminster. The city's population fell rapidly in the 19th century and through most of the 20th century, as people moved outwards in all directions to London's vast suburbs, and many residential buildings were demolished to make way for office blocks. Like many areas of London and other British cities, the City fell victim to large scale and highly destructive aerial bombing during World War II, especially in the Blitz. Whilst St Paul's Cathedral survived the onslaught, large swathes of the area did not and the particularly heavy raids of late December 1940 led to a firestorm called the Second Great Fire of London. There was a major rebuilding programme in the decades following the war, in some parts (such as at the Barbican) dramatically altering the urban landscape. But the destruction of the older historic fabric allowed the construction of modern and larger-scale developments, whereas in those parts not so badly affected by bomb damage the City retains its older character of smaller buildings. The street pattern, which is still largely medieval, was altered slightly in places, although there is a more recent trend of reversing some of the post-war modernist changes made, such as at Paternoster Square. The City suffered terrorist attacks including the 1993 Bishopsgate bombing (IRA) and the 7 July 2005 London bombings (Islamist). In response to the 1993 bombing, a system of road barriers, checkpoints and surveillance cameras referred to as the "ring of steel" has been maintained to control entry points to the city. The 1970s saw the construction of tall office buildings including the 600-foot (183 m), 47-storey NatWest Tower, the first skyscraper in the UK. 
By the 2010s, office space development had intensified in the City, especially in the central, northern and eastern parts, with skyscrapers including 30 St Mary Axe ("the Gherkin"), the Leadenhall Building ("the Cheesegrater"), 20 Fenchurch Street ("the Walkie-Talkie"), the Broadgate Tower, the Heron Tower and 22 Bishopsgate. The main residential section of the City today is the Barbican Estate, constructed between 1965 and 1976. The Museum of London was based there until March 2023 (due to reopen in West Smithfield in 2026), whilst a number of other services provided by the corporation are still maintained on the Barbican Estate. Governance. The city has a unique political status, a legacy of its uninterrupted integrity as a corporate city since the Anglo-Saxon period and its singular relationship with the Crown. Historically its system of government was not unusual, but it was not reformed by the Municipal Corporations Act 1835 and little changed by later reforms, so that it is the only local government in the UK where elections are not run on the basis of one vote for every adult citizen. It is administered by the City of London Corporation, headed by the Lord Mayor of London (not to be confused with the separate Mayor of London, an office created only in the year 2000), which is responsible for a number of functions and has interests in land beyond the city's boundaries. Unlike other English local authorities, the corporation has two council bodies: the (now largely ceremonial) Court of Aldermen and the Court of Common Council. The Court of Aldermen represents the wards, with each ward (irrespective of size) returning one alderman. The chief executive of the Corporation holds the ancient office of Town Clerk of London. The city is a ceremonial county which has a Commission of Lieutenancy headed by the Lord Mayor instead of a Lord-Lieutenant, and has two Sheriffs instead of a High Sheriff (see list of Sheriffs of London); these are quasi-judicial offices appointed by the livery companies, an ancient political system based on the representation and protection of trades (guilds). Senior members of the livery companies are known as liverymen and form the Common Hall, which chooses the lord mayor, the sheriffs and certain other officers. Wards. The city is made up of 25 wards. They are survivors of the medieval government system that allowed a very local area to exist as a self-governing unit within the wider city. They can be described as electoral/political divisions; ceremonial, geographic and administrative entities; and sub-divisions of the city. Each ward has an Alderman, who until the mid-1960s held office for life but must now stand for re-election at least every six years; the City's Aldermen are the only directly elected Aldermen in the United Kingdom. Wards continue to have a Beadle, an ancient position which is now largely ceremonial and whose main remaining function is the running of an annual Wardmote of electors, representatives and officials. At the Wardmote the ward's Alderman appoints at least one Deputy for the year ahead, and Wardmotes are also held during elections. Each ward also has a Ward Club, which is similar to a residents' association. The wards are ancient, and their number has changed three times since time immemorial. Following boundary changes in 1994, and later reform of the business vote in the city, there was a major boundary and electoral representation revision of the wards in 2003, and they were reviewed again in 2010 for change in 2013, though not to such a dramatic extent. 
The review was conducted by senior officers of the corporation and senior judges of the Old Bailey; the wards are reviewed by this process to avoid malapportionment. The procedure of review is unique in the United Kingdom as it is not conducted by the Electoral Commission or a local government boundary commission every 8 to 12 years, which is the case for all other wards in Great Britain. Particular churches, livery company halls and other historic buildings and structures are associated with a ward, such as St Paul's Cathedral with Castle Baynard, and London Bridge with Bridge; boundary changes in 2003 removed some of these historic connections. Each ward elects an alderman to the Court of Aldermen, and commoners (the City equivalent of a councillor) to the Court of Common Council of the corporation. Only electors who are Freemen of the City of London are eligible to stand. The number of commoners a ward sends to the Common Council varies from two to ten, depending on the number of electors in each ward. Since the 2003 review it has been agreed that the four more residential wards: Portsoken, Queenhithe, Aldersgate and Cripplegate together elect 20 of the 100 commoners, whereas the business-dominated remainder elect the remaining 80 commoners. 2003 and 2013 boundary changes have increased the residential emphasis of the mentioned four wards. Census data provides eight nominal rather than 25 real wards, all of varying size and population. Being subject to renaming and definition at any time, these census 'wards' are notable in that four of the eight wards accounted for 67% of the 'square mile' and held 86% of the population, and these were in fact similar to and named after four City of London wards: Elections. The city has a unique electoral system. Most of its voters are representatives of businesses and other bodies that occupy premises in the city. Its ancient wards have very unequal numbers of voters. In elections, both the businesses based in the city and the residents of the City vote. The City of London Corporation was not reformed by the Municipal Corporations Act 1835, because it had a more extensive electoral franchise than any other borough or city; in fact, it widened this further with its own equivalent legislation allowing one to become a freeman without being a liveryman. In 1801, the city had a population of about 130,000, but increasing development of the city as a central business district led to this falling to below 5,000 after the Second World War. It has risen slightly to around 9,000 since, largely due to the development of the Barbican Estate. In 2009, the business vote was about 24,000, greatly exceeding residential voters. As the City of London Corporation has not been affected by other municipal legislation over the period of time since then, its electoral practice has become increasingly anomalous. Uniquely for city or borough elections, its elections remain independent-dominated. The business or "non-residential vote" was abolished in other UK local council elections by the Representation of the People Act 1969, but was preserved in the City of London. The principal reason given by successive UK governments for retaining this mechanism for giving businesses representation, is that the city is "primarily a place for doing business". About 330,000 non-residents constitute the day-time population and use most of its services, far outnumbering residents, who number around 7,000 (2011). 
By contrast, opponents of the retention of the business vote argue that it is a cause of institutional inertia. The City of London (Ward Elections) Act 2002 (c. vi), a local act of Parliament, reformed the voting system and greatly increased the business franchise, allowing many more businesses to be represented. Under the new system, the number of non-resident voters has doubled from 16,000 to 32,000. Previously disenfranchised firms (and other organisations) are entitled to nominate voters, in addition to those already represented, and all such bodies are now required to choose their voters in a representative fashion. Bodies employing fewer than 10 people may appoint 1 voter; those employing 10 to 50 people, 1 voter for every 5 employees; and those employing more than 50 people, 10 voters plus 1 additional voter for each 50 employees beyond the first 50 (a short worked sketch of this scale appears below). The Act also changed other aspects of an earlier act, dating from 1957, relating to elections in the city. The Temple. Inner Temple and Middle Temple (which neighbour each other) in the western ward of Farringdon Without are within the boundaries and liberties of the City, but can be thought of as independent enclaves. They are two of the few remaining liberties, an old name for a geographic division with special rights. They are extra-parochial areas, historically not governed by the City of London Corporation (and are today regarded as local authorities for most purposes) and equally outside the ecclesiastical jurisdiction of the Bishop of London. Other functions. Within the city, the Corporation owns and runs both Smithfield Market and Leadenhall Market. It owns land beyond its boundaries, including open spaces (parks, forests and commons) in and around Greater London, including most of Epping Forest and Hampstead Heath. The Corporation owns Old Spitalfields Market and Billingsgate Fish Market, in the neighbouring London Borough of Tower Hamlets. It owns and helps fund the Old Bailey, the Central Criminal Court for England and Wales, as a gift to the nation, having begun as the City and Middlesex Sessions. The Honourable The Irish Society, a body closely linked with the corporation, also owns many public spaces in Northern Ireland. The city has its own independent police force, the City of London Police; the Common Council (the main body of the corporation) is the police authority. The corporation also runs the Hampstead Heath Constabulary, Epping Forest Keepers and the City of London market constabularies (whose members are no longer attested as constables but retain the historic title). The majority of Greater London is policed by the Metropolitan Police Service, based at New Scotland Yard. The city has one hospital, St Bartholomew's Hospital, also known as 'Barts'. Founded in 1123, it is located at Smithfield, and is undergoing a long-awaited regeneration after doubts as to its continuing use during the 1990s. The city is the third largest UK patron of the arts. It oversees the Barbican Centre and subsidises several important performing arts companies. The London Port Health Authority, which is the responsibility of the corporation, is responsible for all port health functions on the tidal part of the Thames, including the Port of London and related seaports, and London City Airport. The Corporation oversees the Bridge House Estates, which maintains Blackfriars Bridge, Millennium Bridge, Southwark Bridge, London Bridge and Tower Bridge. The City's flag flies over Tower Bridge, although neither footing is in the city. The boundary of the City. 
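The business-voter scale described under Elections above is, in effect, a small piecewise formula. The sketch below is illustrative only: it follows the scale exactly as summarised in the text, and the round-down treatment of employee counts that are not exact multiples of five or fifty is an assumption rather than a reading of the statute itself.

```python
def business_voters(employees: int) -> int:
    """Voters a body may appoint under the scale summarised above (illustrative only).

    Round-down (integer) division is assumed where the text does not spell out
    the statute's exact rounding rules.
    """
    if employees < 10:
        return 1                          # fewer than 10 employees: one voter
    if employees <= 50:
        return employees // 5             # 10 to 50 employees: one voter per five employees
    return 10 + (employees - 50) // 50    # over 50: ten voters, plus one per further fifty

# Illustrative values: 9 -> 1, 25 -> 5, 50 -> 10, 120 -> 11, 250 -> 14
```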
The size of the city was constrained by a defensive perimeter wall, known as London Wall, which was built by the Romans in the late 2nd century to protect their strategic port city. However the boundaries of the City of London no longer coincide with the old city wall, as the City expanded its jurisdiction slightly over time. During the medieval era, the city's jurisdiction expanded westwards, crossing the historic western border of the original settlement—the River Fleet—along Fleet Street to Temple Bar. The city also took in the other "City bars" which were situated just beyond the old walled area, such as at Holborn, Aldersgate, West Smithfield, Bishopsgate and Aldgate. These were the important entrances to the city and their control was vital in maintaining the city's special privileges over certain trades. Most of the wall has disappeared, but several sections remain visible. A section near the Museum of London was revealed after the devastation of an air raid on 29 December 1940 at the height of the Blitz. Other visible sections are at St Alphage, and there are two sections near the Tower of London. The River Fleet was canalised after the Great Fire of 1666 and then in stages was bricked up and has been since the 18th century one of London's "lost rivers or streams", today underground as a storm drain. The boundary of the city was unchanged until minor boundary changes on 1 April 1994, when it expanded slightly to the west, north and east, taking small parcels of land from the London Boroughs of Westminster, Camden, Islington, Hackney and Tower Hamlets. The main purpose of these changes was to tidy up the boundary where it had been rendered obsolete by changes in the urban landscape. In this process the city also lost small parcels of land, though there was an overall net gain (the City grew from 1.05 to 1.12 square miles). Most notably, the changes placed the (then recently developed) Broadgate estate entirely in the city. Southwark, to the south of the city on the other side of the Thames, was within the City between 1550 and 1899 as the Ward of Bridge Without, a situation connected with the Guildable Manor. The city's administrative responsibility there had in practice disappeared by the mid-Victorian period as various aspects of metropolitan government were extended into the neighbouring areas. Today it is part of the London Borough of Southwark. The Tower of London has always been outside the city and comes under the London Borough of Tower Hamlets. Arms, motto and flag. The Corporation of the City of London has a full achievement of armorial bearings consisting of a shield on which the arms are displayed, a crest displayed on a helm above the shield, supporters on either side and a motto displayed on a scroll beneath the arms. The coat of arms is "anciently recorded" at the College of Arms. The arms consist of a silver shield bearing a red cross with a red upright sword in the first quarter. They combine the emblems of the patron saints of England and London: the Cross of St George with the symbol of the martyrdom of Saint Paul. The sword is often erroneously supposed to commemorate the killing of Peasants' Revolt leader Wat Tyler by Lord Mayor of London William Walworth. However the arms were in use some months before Tyler's death, and the tradition that Walworth's dagger is depicted may date from the late 17th century. The Latin motto of the city is "Domine dirige nos", which translates as "Lord, direct us". 
It is thought to have been adopted in the 17th century, as the earliest record of it is in 1633. A banner of the arms (the design on the shield) is flown as a flag. Geography. The City of London is the smallest ceremonial county of England by area and population, and the fourth most densely populated. Of the 326 English districts, it is the second smallest by population, after the Isles of Scilly, and the smallest by area. It is also the smallest English city by population (and in Britain, only two cities in Wales are smaller), and the smallest in the UK by area. The elevation of the City ranges from sea level at the Thames to at the junction of High Holborn and Chancery Lane. Two small but notable hills are within the historic core, Ludgate Hill to the west and Cornhill to the east. Between them ran the Walbrook, one of the many "lost" rivers or streams of London (another is the Fleet). Boundary. Beginning in the west, where the City borders Westminster, the boundary crosses the Victoria Embankment from the Thames, passes to the west of Middle Temple, then turns for a short distance along the Strand and near Temple Bar then north up Chancery Lane, where it borders Camden. It turns east along Holborn to Holborn Circus and then goes northeast to Charterhouse Street. As it crosses Farringdon Road it becomes the boundary with Islington. It continues to Aldersgate, goes north, and turns east into some back streets soon after Aldersgate becomes Goswell Road, since 1994 embracing all of the corporation's Golden Lane Estate. Here, at Baltic Street West, is the most northerly extent. The boundary includes all of the Barbican Estate and continues east along Ropemaker Street and its continuation on the other side of Moorgate, becomes South Place. It goes north, reaching the border with Hackney, then east, north, east on back streets, with Worship Street forming a northern boundary, so as to include the Broadgate estate. The boundary then turns south at Norton Folgate and becomes the border with Tower Hamlets. It continues south into Bishopsgate, and takes some backstreets to Middlesex Street (Petticoat Lane) where it continues south-east then south. It then turns south-west, crossing the Minories so as to exclude the Tower of London, and then reaches the Thames. The boundary then runs up the centre of the low-tide channel of the Thames, with the exception that Blackfriars Bridge (including the river beneath and land at its south end) is entirely part of the City, making the City and Borough of Richmond upon Thames the only London districts to span north and south of the river. The span and southern abutment of London Bridge is part of the city for some purposes (and as such is part of Bridge ward). The boundaries are marked by black bollards bearing the city's emblem, and by dragon boundary marks at major entrances, such as Holborn and the south end of London Bridge. A more substantial monument marks the boundary at Temple Bar on Fleet Street. In some places, the financial district extends slightly beyond the boundaries, notably to the north and east, into the London boroughs of Tower Hamlets, Hackney and Islington, and informally these locations are regarded as being part of the "Square Mile". Since the 1990s the eastern fringe, extending into Hackney and Tower Hamlets, has increasingly been a focus for large office developments due to the availability of large sites compared to within the city. Gardens and public art. 
The city has no sizeable parks within its boundary, but does have a network of a large number of gardens and small open spaces, many of them maintained by the corporation. These range from formal gardens such as the one in Finsbury Circus, containing a bowling green and bandstand, to churchyards such as St Olave Hart Street, to water features and artwork in courtyards and pedestrianised lanes. Gardens include: There are a number of private gardens and open spaces, often within courtyards of the larger commercial developments. Two of the largest are those of the Inner Temple and Middle Temple Inns of Court, in the far southwest. The Thames and its riverside walks are increasingly being valued as open space and in recent years efforts have been made to increase the ability for pedestrians to access and walk along the river. Climate. The nearest weather station has historically been the London Weather Centre at Kingsway/ Holborn, although observations ceased in 2010. Now St. James Park provides the nearest official readings. The city has an oceanic climate (Köppen "Cfb") modified by the urban heat island in the centre of London. This generally causes higher night-time minima than outlying areas. For example, the August mean minimum of compares to a figure of for Greenwich and Heathrow whereas is at Wisley in the middle of several square miles of Metropolitan Green Belt. All figures refer to the observation period 1971–2000. Accordingly, the weather station holds the record for the UK's warmest overnight minimum temperature, , recorded on 4 August 1990. The maximum is , set on 10 August 2003. The absolute minimum for the weather station is a mere , compared to readings around towards the edges of London. Unusually, this temperature was during a windy and snowy cold spell (mid-January 1987), rather than a cold clear night—cold air drainage is arrested due to the vast urban area surrounding the city. The station holds the record for the highest British mean monthly temperature, (mean maximum , mean minimum during July 2006). However, in terms of daytime maximum temperatures, Cambridge NIAB and Botanical Gardens with a mean maximum of , and Heathrow with all exceeded this. Public services. Police and security. The city is a police area and has its own police force, the City of London Police, separate from the Metropolitan Police Service covering the majority of Greater London. The City Police previously had three police stations, at Snow Hill, Wood Street and Bishopsgate. They now only retain Bishopsgate along with an administrative headquarters at Guildhall Yard East. The force comprises 735 police officers including 273 detectives. It is the smallest territorial police force in England and Wales, in both geographic area and the number of police officers. Where the majority of British police forces have silver-coloured badges, those of the City of London Police are black and gold featuring the City crest. The force has rare red and white chequered cap bands and unique red and white striped duty arm bands on the sleeves of the tunics of constables and sergeants (red and white being the colours of the city), which in most other British police forces are black and white. City police sergeants and constables wear crested custodian helmets whilst on foot patrol. These helmets do not feature either St Edward's Crown or the Brunswick Star, which are used on most other police helmets in England and Wales. 
The city's position as the United Kingdom's financial centre and a critical part of the country's economy, contributing about 2.5% of the UK's gross national product, has resulted in it becoming a target for political violence. The Provisional IRA exploded several bombs in the early 1990s, including the 1993 Bishopsgate bombing. The area is also spoken of as a possible target for al-Qaeda. For instance, when in May 2004 the BBC's "Panorama" programme examined the preparedness of Britain's emergency services for a terrorist attack on the scale of the 11 September 2001 attacks, it simulated a chemical explosion on Bishopsgate in the east of the city. The "Ring of Steel" was established in the wake of the IRA bombings to guard against terrorist threats. Fire brigade. The city has fire risks in many historic buildings, including St Paul's Cathedral, the Old Bailey, Mansion House, Smithfield Market and the Guildhall, and also in numerous high-rise buildings. There is one London Fire Brigade station in the city, at Dowgate, with one pumping appliance. The City relies upon stations in the surrounding London boroughs to support it at some incidents. The first fire engine is in attendance in roughly five minutes on average, the second, when required, in a little over five and a half minutes. There were 1,814 incidents attended in the City in 2006/2007, the lowest figure in Greater London, and no one died in an event arising from a fire in the four years prior. Power. There is a power station located in Charterhouse Street that also provides heat to some of the surrounding buildings. Demography. The Office for National Statistics recorded the population in 2011 as 7,375, slightly higher than at the previous census in 2001, and estimated the population at mid-2016 to be 9,401. At the 2001 census the ethnic composition was 84.6% White, 6.8% South Asian, 2.6% Black, 2.3% Mixed, 2.0% Chinese and 1.7% listed as "other". The population was between 120,000 and 140,000 in the first half of the 19th century, decreasing dramatically from 1851 to 1991, with a small increase between 1991 and 2001. The only notable boundary change since the first census in 1801 occurred in 1994. The city's full-time working residents have much higher gross weekly pay than in London and Great Britain (England, Wales and Scotland): £773.30 compared to £598.60 and £491.00 respectively. There is a large inequality of income between genders (£1,085.90 for men compared to £653.50 for women), which can be explained by differences in job type and length of employment. The 2001 Census showed the city as a unique district amongst the 376 districts surveyed in England and Wales. The city had the highest proportional population increase, the highest proportions of one-person households and of people with qualifications at degree level or higher, and the highest indications of overcrowding. It recorded the lowest proportions of households with cars or vans, of people who travel to work by car and of married-couple households, and the lowest average household size: just 1.58 people. It also ranked highest within the Greater London area for the percentage of people with no religion and the percentage of people in employment. Economy. The City of London vies with New York City's Lower Manhattan for the distinction of the world's pre-eminent financial centre. The London Stock Exchange (shares and bonds), Lloyd's of London (insurance) and the Bank of England are all based in the city. Over 500 banks have offices in the city. 
The Alternative Investment Market, a market for trades in equities of smaller firms, is a recent development. In 2009, the City of London accounted for 2.4% of UK GDP. London's foreign exchange market has been described by Reuters as 'the crown jewel of London's financial sector'. Of the $3.98 trillion daily global turnover, as measured in 2009, trading in London accounted for around $1.85 trillion, or 46.7% of the total. The pound sterling, the currency of the United Kingdom, is globally the fourth-most traded currency and the fourth most held reserve currency. Canary Wharf, a few miles east of the City in Tower Hamlets, which houses many banks and other institutions formerly located in the Square Mile, has since 1991 become another centre for London's financial services industry. Although growth has continued in both locations, and there have been relocations in both directions, the Corporation has come to realise that its planning policies may have been causing financial firms to choose Canary Wharf as a location. In 2022, 12.3% of City of London residents had been granted non-domicile status in order to avoid their paying tax in the UK. Headquarters. Many major global companies have their headquarters in the city, including Aviva, BT Group, Lloyds Banking Group, Quilter, Prudential, Schroders, Standard Chartered, and Unilever. A number of the world's largest law firms are headquartered in the city, including four of the Magic Circle law firms (Allen & Overy, Freshfields Bruckhaus Deringer, Linklaters and Slaughter & May), as well as other firms such as Ashurst, DLA Piper, Eversheds Sutherland, Herbert Smith Freehills and Hogan Lovells. Other sectors. Whilst the financial sector, and related businesses and institutions, continue to dominate, the economy is not limited to that sector. The legal profession has a strong presence, especially in the west and north (i.e., towards the Inns of Court). Retail businesses were once important, but have gradually moved to the West End of London, though it is now Corporation policy to encourage retailing in some locations, for example at Cheapside near St Paul's. The city has a number of visitor attractions, mainly based on its historic heritage as well as the Barbican Centre and adjacent Museum of London, though tourism is not at present a major contributor to the city's economy or character. The city has many pubs, bars and restaurants, and the "night-time" economy does feature in the Bishopsgate area, towards Shoreditch. The meat market at Smithfield, wholly within the city, continues to be one of London's main markets (the only one remaining in central London) and the country's largest meat market. In the east is Leadenhall Market, a fresh food market that is also a visitor attraction. Retail and residential. The trend for purely office development is beginning to reverse as the Corporation encourages residential use, albeit with development occurring when it arises on windfall sites. The city has a target of 90 additional dwellings per year. Some of the extra accommodation is in small pre-World War II listed buildings, which are not suitable for occupation by the large companies which now provide much of the city's employment. Recent residential developments include "the Heron", a high-rise residential building on the Milton Court site adjacent to the Barbican, and the Heron Plaza development on Bishopsgate is also expected to include residential parts. Since the 1990s, the City has diversified away from near exclusive office use in other ways. 
For example, several hotels and the first department store opened in the 2000s. A shopping centre, open seven days a week, opened more recently at One New Change, Cheapside (near St Paul's Cathedral), in October 2010. However, large sections remain quiet at weekends, especially in the eastern section, and it is quite common to find shops, pubs and cafes closed on these days. Landmarks. Historic buildings. Fire, bombing and post-World War II redevelopment have meant that the city, despite its history, has fewer intact historic structures than one might expect. Nonetheless, there remain many dozens of (mostly Victorian and Edwardian) fine buildings, typically in historicist and neoclassical style. They include the Monument to the Great Fire of London ("the Monument"), St Paul's Cathedral, the Guildhall, the Royal Exchange, Dr. Johnson's House, Mansion House and a number of churches, many designed by Sir Christopher Wren, who also designed St Paul's. Prince Henry's Room and 2 King's Bench Walk are notable historic survivors of heavy bombing of the Temple area, which has largely been rebuilt to its historic form. Another example of a bomb-damaged place having been restored is Staple Inn on Holborn. A few small sections of the Roman London Wall exist, for example near the Tower of London and in the Barbican area. Among the twentieth-century listed buildings are Bracken House, the first post-World War II building in the country to be given statutory protection, and the whole of the Barbican and Golden Lane Estate. The Tower of London is not in the city, but is a notable visitor attraction which brings tourists to the southeast of the city. Other landmark buildings with historical significance include the Bank of England, the Old Bailey, the Custom House, Smithfield Market, Leadenhall Market and St Bartholomew's Hospital. Noteworthy contemporary buildings include a number of modern high-rise buildings (see section below) as well as the Lloyd's building. Skyscrapers and tall buildings. A growing number of tall buildings and skyscrapers are principally used by the financial sector. Almost all are situated on the eastern side, around Bishopsgate, Leadenhall Street and Fenchurch Street, in the financial core of the city. In the north there is a smaller cluster comprising the Barbican Estate's three tall residential towers and the commercial CityPoint tower. In 2007, the tall Drapers' Gardens building was demolished and replaced by a shorter tower. The city's buildings of at least in height are: The timeline of the tallest building in the city is as follows: Transport. Rail and Tube. The city is well served by the London Underground ("tube") and National Rail networks. Seven London Underground lines serve the city; the underground stations include: In addition, Aldgate East, Farringdon, Temple and Tower Hill tube stations are all situated within a short distance of the City of London boundary. The Docklands Light Railway (DLR) has two termini in the city: Bank and Tower Gateway. The DLR links the City directly to the East End. Destinations include Canary Wharf and London City Airport. The Elizabeth line (constructed by the Crossrail project) runs east–west underneath the City of London. The line serves two stations in or very near the City – Farringdon and Liverpool Street – which additionally serve the Barbican and Moorgate areas. 
Elizabeth line services link the City directly to destinations such as Canary Wharf, Heathrow Airport, and the M4 Corridor high-technology hub (serving Slough and Reading). The city is served by a frequent Thameslink rail service which runs north–south through London. Thameslink services call at Farringdon, City Thameslink, and London Blackfriars. This provides the city with a direct link to key destinations across London, including Elephant & Castle, London Bridge, and St Pancras International (for the Eurostar to mainland Europe). There are also regular, direct trains from these stations to major destinations across East Anglia and the South East, including Bedford, Brighton, Cambridge, Gatwick Airport, Luton Airport, and Peterborough. There are several "London Terminals" in the city. All stations in the city are in London fare zone 1. Road. The national A1, A10, A3, A4, and A40 road routes begin in the city. The city is in the London congestion charge zone, with the small exception, on the eastern boundary, of the sections of the A1210/A1211 that are part of the Inner Ring Road. The following bridges, listed west to east (downstream), cross the River Thames: Blackfriars Bridge, Blackfriars Railway Bridge, Millennium Bridge (footbridge), Southwark Bridge, Cannon Street Railway Bridge and London Bridge; Tower Bridge is not in the city. The city, like most of central London, is well served by buses, including night buses. Two bus stations are in the city, at Aldgate on the eastern boundary with Tower Hamlets, and at Liverpool Street by the railway station. However, although the London Road Traffic Act 1924 removed from existing local authorities the powers to prevent the development of road passenger transport services within the London Metropolitan Area, the City of London retained most such powers. As a consequence, neither trolleybus nor Green Line coach services were permitted to enter the City to pick up or set down passengers; hence the building of Aldgate (Minories) Trolleybus and Coach station, as well as the complex terminal arrangements at Parliament Hill Fields. This restriction was removed by the Transport Act 1985. Cycling. Cycling infrastructure in the city is maintained by the City of London Corporation and Transport for London (TfL). The Santander Cycles and Beryl bike sharing systems operate in the City of London. River. One London River Services pier is on the Thames in the city, Blackfriars Millennium Pier, though the Tower Millennium Pier lies adjacent to the boundary near the Tower of London. One of the Port of London's 25 safeguarded wharves, Walbrook Wharf, is adjacent to Cannon Street station and is used by the corporation to transfer waste via the river. Swan Lane Pier, just upstream of London Bridge, is proposed to be replaced and upgraded for regular passenger services, planned to take place in 2012–2015; before then, Tower Pier is to be extended. There is a public riverside walk along the river bank, part of the Thames Path, which opened in stages – the route within the city was completed by the opening of a stretch at Queenhithe in 2023. The walk along Walbrook Wharf is closed to pedestrians when waste is being transferred onto barges. Travel to work (by residents). According to a survey conducted in March 2011, the methods by which employed residents aged 16–74 get to work varied widely: 48.4% go on foot; 19.5% via light rail (i.e. 
the Underground, DLR, etc.); 9.2% work mainly from home; 5.8% take the train; 5.6% travel by bus, minibus, or coach; and 5.3% go by bicycle; with just 3.4% commuting by car or van, as driver or passenger. Education. The city is home to a number of higher education institutions including: the Guildhall School of Music and Drama, the Cass Business School, The London Institute of Banking & Finance and parts of three of the universities in London: the Maughan Library of King's College London on Chancery Lane, the business school of London Metropolitan University, and a campus of the University of Chicago Booth School of Business. The College of Law has its London campus in Moorgate. Part of Barts and The London School of Medicine and Dentistry is on the Barts hospital site at West Smithfield. The city has only one directly maintained primary school, The Aldgate School (formerly Sir John Cass's Foundation Primary School) at Aldgate (ages 4 to 11). It is a Voluntary-Aided (VA) Church of England school, maintained by the Education Service of the City of London. City residents send their children to schools in neighbouring Local Education Authorities, such as Islington, Tower Hamlets, Westminster and Southwark. The City controls three independent schools, City of London School (a boys' school) and City of London School for Girls in the city, and the City of London Freemen's School (co-educational day and boarding) in Ashtead, Surrey. The City of London School for Girls and City of London Freemen's School have their own preparatory departments for entrance at age seven. It is the principal sponsor of The City Academy, Hackney, City of London Academy Islington, and City of London Academy, Southwark. Public libraries. Libraries operated by the Corporation include three lending libraries; Barbican Library, Shoe Lane Library and Artizan Street Library and Community Centre. Membership is open to all – with one official proof of address required to join. Guildhall Library, and City Business Library are also public reference libraries, specialising in the history of London and business reference resources. Money laundering. The City of London's role in illicit financial activity such as money laundering has earned the financial hub sobriquets such as 'The Laundromat' and 'Londongrad'. In May 2024, the UK's then deputy foreign secretary, Andrew Mitchell, said that 40% of the dirty money in the world goes through London and crown dependencies.
Clitoris
In amniotes, the clitoris (plural: clitorises or clitorides) is a female sex organ. In humans, it is the vulva's most erogenous area and generally the primary anatomical source of female sexual pleasure. The clitoris is a complex structure, and its size and sensitivity can vary. The visible portion, the glans of the clitoris, is typically roughly the size and shape of a pea and is estimated to have at least 8,000 nerve endings. Sexological, medical, and psychological debate has focused on the clitoris, and it has been subject to social constructionist analyses and studies. Such discussions range from anatomical accuracy and gender inequality to female genital mutilation and orgasmic factors and their physiological explanation for the G-spot. The only known purpose of the human clitoris is to provide sexual pleasure. Knowledge of the clitoris is significantly affected by cultural perceptions of the organ. Studies suggest that knowledge of its existence and anatomy is scant in comparison with that of other sexual organs (especially male sex organs) and that more education about it could help alleviate stigmas, such as the idea that the clitoris and vulva in general are visually unappealing or that female masturbation is taboo and disgraceful. The clitoris is homologous to the penis in males. Etymology and terminology. The Oxford English Dictionary states that the Neo-Latin word "clītoris" likely has its origin in an Ancient Greek word meaning "little hill", and is perhaps derived from a verb meaning "to shut" or "to sheathe". "Clitoris" is also related to a Greek word for "key", "indicating that the ancient anatomists considered it the key" to female sexuality. In addition, the Online Etymology Dictionary suggests that other Greek candidates for the word's etymology include a noun meaning "latch" or "hook" and a verb meaning "to touch or titillate lasciviously", "to tickle". The Oxford English Dictionary also states that the colloquially shortened form "clit", the first occurrence of which was noted in the United States, has been used in print since 1958; until then, the common abbreviation was "clitty". Other slang terms for the clitoris are "bean", "nub", and "love button". The term "clitoris" is commonly used to refer to the glans alone. In recent anatomical works, the clitoris has also been referred to as the bulbo-clitoral organ. Structure. Most of the clitoris is composed of internal parts. In humans, it consists of the glans, the body (which is composed of two erectile structures known as the corpora cavernosa), the prepuce, and the root. The frenulum is beneath the glans. Research indicates that clitoral tissue extends into the vaginal anterior wall. Şenaylı et al. said that the histological evaluation of the clitoris, "especially of the corpora cavernosa, is incomplete because for many years the clitoris was considered a rudimentary and nonfunctional organ". They added that Baskin and colleagues examined the clitoris' masculinization after dissection and, using imaging software after Masson's trichrome staining, put the serially dissected specimens together; this revealed that nerves surround the whole clitoral body. The clitoris, its bulbs, the labia minora, and the urethra involve two histologically distinct types of vascular tissue (tissue related to blood vessels), the first of which is trabeculated, erectile tissue innervated by the cavernous nerves. 
The trabeculated tissue has a spongy appearance; along with blood, it fills the large, dilated vascular spaces of the clitoris and the bulbs. Beneath the epithelium of the vascular areas is smooth muscle. As indicated by Yang et al.'s research, it may also be that the urethral lumen (the inner open space or cavity of the urethra), which is surrounded by spongy tissue, has tissue that "is grossly distinct from the vascular tissue of the clitoris and bulbs, and on macroscopic observation, is paler than the dark tissue" of the clitoris and bulbs. The second type of vascular tissue is non-erectile; it may consist of blood vessels that are dispersed within a fibrous matrix and have only a minimal amount of smooth muscle. Glans. Highly innervated, the clitoral glans ("glans" means "acorn" in Latin), also known as the "head" or "tip", sits at the top of the clitoral body as a fibro-vascular cap and is usually the size and shape of a pea, although it is sometimes much larger or smaller. The glans is separated from the clitoral body by a ridge of tissue called the "corona". The clitoral glans is estimated to have 8,000, and possibly 10,000 or more, sensory nerve endings, making it the most sensitive erogenous zone. The glans also has numerous genital corpuscles. Research conflicts on whether the glans is composed of erectile or non-erectile tissue. Some sources describe the clitoral glans and labia minora as composed of non-erectile tissue; this is especially the case for the glans. They state that the clitoral glans and labia minora have blood vessels that are dispersed within a fibrous matrix and have only a minimal amount of smooth muscle, or that the clitoral glans is "a midline, densely neural, non-erectile structure". The clitoral glans is homologous to the male penile glans. Other descriptions of the glans assert that it is composed of erectile tissue and that erectile tissue is present within the labia minora. The glans may be noted as having glandular vascular spaces that are not as prominent as those in the clitoral body, with the spaces being separated more by smooth muscle than in the body and crura. Adipose tissue is absent in the labia minora, but the organ may be described as being made up of dense connective tissue, erectile tissue and elastic fibers. Frenulum. The clitoral frenulum or frenum (frenulum clitoridis and crus glandis clitoridis in Latin; the former meaning "little bridle") is a medial band of tissue formed between the undersurface of the glans and the top ends of the labia minora. It is homologous to the penile frenulum in males. The frenulum's main function is to maintain the clitoris in its innate position. Body. The clitoral body (also known as the shaft of the clitoris) is the portion behind the glans that contains the union of the corpora cavernosa, a pair of sponge-like regions of erectile tissue that hold most of the blood in the clitoris during erection. It is homologous to the penile shaft in the male. The two corpora forming the clitoral body are surrounded by a thick fibro-elastic tunica albuginea, a sheath of connective tissue. These corpora are separated incompletely from each other in the midline by a fibrous pectiniform septum, a comb-like band of connective tissue extending between the corpora cavernosa. The clitoral body is also connected to the pubic symphysis by the suspensory ligament. The body of the clitoris is bent in shape, forming the clitoral angle or "elbow". 
The angle divides the body into the ascending part (internal) near the pubic symphysis and the descending part (external), which can be seen and felt through the clitoral hood. Root. Lying in the perineum (the space between the vulva and anus) and within the superficial perineal pouch is the root of the clitoris, which consists of the posterior ends of the clitoris, the crura and the bulbs of the vestibule. The crura ("legs") are the parts of the corpora cavernosa extending from the clitoral body and form an upside-down "V" shape. Each crus (the singular form of crura) is attached to the corresponding ischial ramus; the crura are extensions of the corpora beneath the descending pubic rami. Concealed behind the labia minora, the crura end with attachment at or just below the middle of the pubic arch. Associated structures include the urethral sponge, the perineal sponge, a network of nerves and blood vessels, the suspensory ligament of the clitoris, muscles and the pelvic floor. The vestibular bulbs are more closely related to the clitoris than to the vestibule because of the similarity of the trabecular and erectile tissue within the clitoris and its bulbs, and the absence of trabecular tissue in other parts of the vulva, with the erectile tissue's trabecular nature allowing engorgement and expansion during sexual arousal. The vestibular bulbs are typically described as lying close to the crura on either side of the vaginal opening; internally, they are beneath the labia majora. The anterior sections of the bulbs unite to create the bulbar commissure, which forms a long strip of erectile tissue dubbed the infra-corporeal residual spongy part (RSP) that expands from the ventral shaft and terminates as the glans. The RSP is also connected to the shaft via the pars intermedia (venous plexus of Kobelt). When engorged with blood, the bulbs cuff the vaginal opening and cause the vulva to expand outward. Although several texts state that they surround the vaginal opening, Ginger et al. state that this does not appear to be the case and that the tunica albuginea does not envelop the erectile tissue of the bulbs. In Yang et al.'s assessment of the bulbs' anatomy, they conclude that the bulbs "arch over the distal urethra, outlining what might be appropriately called the 'bulbar urethra' in women". Hood. The clitoral hood or prepuce projects at the front of the labial commissure, where the edges of the labia majora meet at the base of the pubic mound. It is partially formed by fusion of the upper labia minora. The hood's function is to cover and protect the glans and external shaft. There is considerable variation in how much of the glans protrudes from the hood and how much is covered by it, ranging from completely covered to fully exposed, and tissue of the labia minora also encircles the base of the glans. Size and length. There is no identified correlation between the size of the glans, or of the clitoris as a whole, and a woman's age, height, weight, use of hormonal contraception, or being postmenopausal, although women who have given birth may have significantly larger clitoral measurements. Centimetre and millimetre measurements of the clitoris show variations in size. The clitoral glans has been cited as typically varying from 2 mm to 1 cm (less than an inch) and usually being estimated at 4 to 5 mm in both the transverse and longitudinal planes. A 1992 study measured the total clitoral length, including glans and body, and reported both a mean and a standard deviation. 
Concerning other studies, researchers from the Elizabeth Garrett Anderson and Obstetric Hospital in London measured the labia and other genital structures of 50 women aged 18 to 50 (mean age 35.6) from 2003 to 2004; the results given for the clitoral glans were 3–10 mm for the range and 5.5 [1.7] mm for the mean. Other research indicates that the clitoral body can measure in length, while the clitoral body and crura together can be or more in length. Development. The clitoris develops from a phallic outgrowth in the embryo called the genital tubercle. In the absence of testosterone, the genital tubercle allows for the formation of the clitoris; the initially rapid growth of the phallus gradually slows, and the body and glans of the clitoris are formed along with its other structures. Function. Sexual stimulation and arousal. The clitoris has an abundance of nerve endings and is the human female's most erogenous part of the body. When sexually stimulated, it may incite sexual arousal, which may result from mental stimulation (sexual fantasy), activity with a sexual partner, or masturbation, and can lead to orgasm. The most effective sexual stimulation of this organ is usually manual or oral, which is often referred to as direct clitoral stimulation; in cases involving sexual penetration, these activities may also be referred to as additional or assisted clitoral stimulation. Direct stimulation involves physical stimulation of the external anatomy of the clitoris: the glans, hood, and shaft. Stimulation of the labia minora, because they are connected with the glans and hood, may have the same effect as direct clitoral stimulation. Though these areas may also receive indirect physical stimulation during sexual activity, such as when in friction with the labia majora, indirect clitoral stimulation is more commonly attributed to penile-vaginal penetration. Penile-anal penetration may also indirectly stimulate the clitoris via the shared sensory nerves (especially the pudendal nerve, which gives off the inferior anal nerves and divides into two terminal branches: the perineal nerve and the dorsal nerve of the clitoris). Due to the glans' high sensitivity, direct stimulation to it is not always pleasurable; instead, direct stimulation to the hood or near the glans is often more pleasurable, with the majority of women preferring to use the hood to stimulate the glans, or to have the glans rolled between the labia, for indirect touch. It is also common for women to enjoy the shaft being softly caressed in concert with the occasional circling of the glans. This might be with or without digital penetration of the vagina, while other women enjoy having the entire vulva caressed. As opposed to the use of dry fingers, stimulation from well-lubricated fingers, whether by vaginal lubrication or a personal lubricant, is usually more pleasurable for the external clitoris. As the clitoris' external location does not allow for direct stimulation by penetration, any external clitoral stimulation while in the missionary position usually results from the pubic bone area. As such, some couples may engage in the woman-on-top position or the coital alignment technique, a sex position combining the "riding high" variation of the missionary position with pressure-counterpressure movements performed by each partner in rhythm with sexual penetration, to maximize clitoral stimulation. 
Same-sex female couples may engage in tribadism (vulva-to-vulva or vulva-to-body rubbing) for ample or mutual clitoral stimulation during whole-body contact. Pressing the penis in a gliding or circular motion against the clitoris, or stimulating it by movement against another body part, may also be practiced. A vibrator (such as a clitoral vibrator), dildo or other sex toy may be used. Other women stimulate the clitoris by use of a pillow or other inanimate object, by a jet of water from the faucet of a bathtub or shower, or by closing their legs and rocking. During sexual arousal, the clitoris and the rest of the vulva engorge and change color as the erectile tissues fill with blood (vasocongestion), and the individual experiences vaginal contractions. The ischiocavernosus and bulbocavernosus muscles, which insert into the corpora cavernosa, contract and compress the dorsal vein of the clitoris (the only vein that drains the blood from the spaces in the corpora cavernosa); the arterial blood continues to flow steadily and, having no way to drain out, fills the venous spaces until they become turgid and engorged with blood. This is what leads to clitoral erection. The prepuce retracts and the glans becomes more visible. The glans doubles in diameter upon arousal and, upon further stimulation, becomes less visible as it is covered by the swelling of the clitoral hood. The swelling protects the glans from direct contact, as direct contact at this stage can be more irritating than pleasurable. Vasocongestion eventually triggers a muscular reflex, which expels the blood that was trapped in surrounding tissues and leads to an orgasm. A short time after stimulation has stopped, especially if orgasm has been achieved, the glans becomes visible again and returns to its normal state, taking a few seconds (usually 5–10) to return to its normal position and 5–10 minutes to return to its original size. If orgasm is not achieved, the clitoris may remain engorged for a few hours, which women often find uncomfortable. Additionally, the clitoris is very sensitive after orgasm, making further stimulation initially painful for some women. Clitoral and vaginal orgasmic factors. General statistics indicate that 70–80 percent of women require direct clitoral stimulation (consistent manual, oral, or other concentrated friction against the external parts of the clitoris) to reach orgasm. Indirect clitoral stimulation (for example, by means of vaginal penetration) may also be sufficient for female orgasm. The area near the entrance of the vagina (the lower third) contains nearly 90 percent of the vaginal nerve endings, and there are areas in the anterior vaginal wall and between the top junction of the labia minora and the urinary meatus that are especially sensitive, but intense sexual pleasure, including orgasm, solely from vaginal stimulation is occasional or otherwise absent because the vagina has significantly fewer nerve endings than the clitoris. The prominent debate over the quantity of vaginal nerve endings began with Alfred Kinsey. Although Sigmund Freud's theory that clitoral orgasms are a prepubertal or adolescent phenomenon and that vaginal (or G-spot) orgasms are something that only physically mature females experience had been criticized before, Kinsey was the first researcher to harshly criticize the theory. 
Through his observations of female masturbation and interviews with thousands of women, Kinsey found that most of the women he observed and surveyed could not have vaginal orgasms, a finding that was also supported by his knowledge of sex organ anatomy. Scholar Janice M. Irvine stated that he "criticized Freud and other theorists for projecting male constructs of sexuality onto women" and "viewed the clitoris as the main center of sexual response". He considered the vagina to be "relatively unimportant" for sexual satisfaction, relaying that "few women inserted fingers or objects into their vaginas when they masturbated". Believing that vaginal orgasms are "a physiological impossibility" because the vagina has insufficient nerve endings for sexual pleasure or climax, he "concluded that satisfaction from penile penetration [is] mainly psychological or perhaps the result of referred sensation". Masters and Johnson's research, as well as Shere Hite's, generally supported Kinsey's findings about the female orgasm. Masters and Johnson were the first researchers to determine that the clitoral structures surround and extend along and within the labia. They observed that both clitoral and vaginal orgasms have the same stages of physical response, and found that the majority of their subjects could only achieve clitoral orgasms, while a minority achieved vaginal orgasms. On that basis, they argued that clitoral stimulation is the source of both kinds of orgasms, reasoning that the clitoris is stimulated during penetration by friction against its hood. The research came at the time of the second-wave feminist movement, which inspired feminists to reject the distinction made between clitoral and vaginal orgasms. Feminist Anne Koedt argued that women's biology had not been properly analyzed because men "have orgasms essentially by friction with the vagina" and not the clitoral area. "Today, with extensive knowledge of anatomy, with [C. Lombard Kelly], Kinsey, and Masters and Johnson, to mention just a few sources, there is no ignorance on the subject [of the female orgasm]", she stated in her 1970 article "The Myth of the Vaginal Orgasm". She added, "There are, however, social reasons why this knowledge has not been popularized. We are living in a male society which has not sought change in women's role". Supporting an anatomical relationship between the clitoris and vagina is a study published in 2005, which investigated the size of the clitoris; Australian urologist Helen O'Connell, described as having initiated discourse among mainstream medical professionals to refocus on and redefine the clitoris, used magnetic resonance imaging (MRI) technology to note a direct relationship between the legs or roots of the clitoris and the erectile tissue of the bulbs and corpora, and the distal urethra and vagina. While some studies, using ultrasound, have found physiological evidence of the G-spot in women who report having orgasms during vaginal intercourse, O'Connell argues that this interconnected relationship is the physiological explanation for the conjectured G-spot and the experience of vaginal orgasms, taking into account the stimulation of the internal parts of the clitoris during vaginal penetration. "The vaginal wall is, in fact, the clitoris", she said. "If you lift the skin off the vagina on the side walls, you get the bulbs of the clitoris – triangular, crescental masses of erectile tissue". 
O'Connell et al., having performed dissections on the vulvas of cadavers and used photography to map the structure of nerves in the clitoris, made the assertion in 1998 that there is more erectile tissue associated with the clitoris than is generally described in anatomical textbooks and were thus already aware that the clitoris is more than just its glans. They concluded that some females have more extensive clitoral tissues and nerves than others, especially having observed this in young cadavers compared to elderly ones, and therefore, whereas the majority of females can only achieve orgasm by direct stimulation of the external parts of the clitoris, the stimulation of the more generalized tissues of the clitoris via vaginal intercourse may be sufficient for others. French researchers Odile Buisson and Pierre Foldès reported findings similar to those of O'Connell. In 2008, they published the first complete 3D sonography of the stimulated clitoris, and republished it in 2009 with new research demonstrating how the erectile tissue of the clitoris engorges and surrounds the vagina. Based on their findings, they argued that women may be able to achieve vaginal orgasm through stimulation of the G-spot because the clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration. They assert that since the front wall of the vagina is inextricably linked with the internal parts of the clitoris, stimulating the vagina without activating the clitoris may be next to impossible. Their 2009 study states that the "coronal planes during perineal contraction and finger penetration demonstrated a close relationship between the root of the clitoris and the anterior vaginal wall". Buisson and Foldès suggested "that the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris' root during a vaginal penetration and subsequent perineal contraction". Researcher Vincenzo Puppo, while agreeing that the clitoris is the center of female sexual pleasure and believing that there is no anatomical evidence of the vaginal orgasm, disagrees with O'Connell and other researchers' terminological and anatomical descriptions of the clitoris (such as referring to the vestibular bulbs as the "clitoral bulbs") and states that "the inner clitoris" does not exist because the penis cannot come in contact with the congregation of multiple nerves/veins situated until the angle of the clitoris, detailed by Georg Ludwig Kobelt, or with the root of the clitoris, which does not have sensory receptors or erogenous sensitivity, during vaginal intercourse. Puppo's belief contrasts with the general belief among researchers that vaginal orgasms are the result of clitoral stimulation; they reaffirm that clitoral tissue extends, or is at least stimulated by its bulbs, even in the area most commonly reported to be the G-spot. It has additionally been theorized that the G-spot is analogous to the base of the penis, with researcher Amichai Kilchevsky expressing the sentiment that because female fetal development is the "default" state in the absence of substantial exposure to male hormones, and the penis is therefore essentially a clitoris enlarged by such hormones, there is no evolutionary reason why females would have an entity in addition to the clitoris that can produce orgasms. 
The general difficulty of achieving orgasms vaginally, a predicament that is likely due to nature easing the process of childbearing by drastically reducing the number of vaginal nerve endings, challenges arguments that vaginal orgasms help encourage sexual intercourse in order to facilitate reproduction. Supporting a distinct G-spot, however, is a study by Rutgers University, published in 2011, which was the first to map the female genitals onto the sensory portion of the brain; the scans indicated that the brain registered distinct feelings between stimulating the clitoris, the cervix and the vaginal wall – where the G-spot is reported to be – when several women stimulated themselves in a functional magnetic resonance machine. Barry Komisaruk, who led the research, stated that he feels that "the bulk of the evidence shows that the G-spot is not a particular thing" and that it is "a region, it's a convergence of many different structures". Vestigiality, adaptionist and reproductive views. Whether the clitoris is vestigial, an adaptation, or serves a reproductive function has been debated. Geoffrey Miller stated that Helen Fisher, Meredith Small and Sarah Blaffer Hrdy "have viewed the clitoral orgasm as a legitimate adaptation in its own right, with major implications for female sexual behavior and sexual evolution". Like Lynn Margulis and Natalie Angier, Miller believes, "The human clitoris shows no apparent signs of having evolved directly through male mate choice. It is not especially large, brightly colored, specifically shaped or selectively displayed during courtship". He contrasts this with other species in which the females have clitorises as long as their male counterparts' penises. He said the human clitoris "could have evolved to be much more conspicuous if males had preferred sexual partners with larger brighter clitorises" and that "its inconspicuous design combined with its exquisite sensitivity suggests that the clitoris is important not as an object of male mate choice, but as a mechanism of female choice". While Miller stated that male scientists such as Stephen Jay Gould and Donald Symons "have viewed the female clitoral orgasm as an evolutionary side-effect of the male capacity for penile orgasm" and that they "suggested that clitoral orgasm cannot be an adaptation because it is too hard to achieve", Gould acknowledged that "most female orgasms emanate from a clitoral, rather than vaginal (or some other), site" and that his nonadaptive belief "has been widely misunderstood as a denial of either the adaptive value of female orgasm in general or even as a claim that female orgasms lack significance in some broader sense". He said that although he accepts that "clitoral orgasm plays a pleasurable and central role in female sexuality and its joys", "[a]ll these favorable attributes, however, emerge just as clearly and just as easily, whether the clitoral site of orgasm arose as a spandrel or an adaptation". He added that the "male biologists who fretted over [the adaptionist questions] simply assumed that a deeply vaginal site, nearer the region of fertilization, would offer greater selective benefit" due to their Darwinian, "summum bonum" beliefs about enhanced reproductive success. 
Similar to Gould's beliefs about adaptionist views, and his view that "females grow nipples as adaptations for suckling, and males grow smaller unused nipples as a spandrel based upon the value of single development channels", American philosopher Elisabeth Lloyd suggested that there is little evidence to support an adaptionist account of female orgasm. Canadian sexologist Meredith L. Chivers stated that "Lloyd views female orgasm as an ontogenetic leftover; women have orgasms because the urogenital neurophysiology for orgasm is so strongly selected for in males that this developmental blueprint gets expressed in females without affecting fitness" and that this is similar to "males hav[ing] nipples that serve no fitness-related function". At the 2002 conference of the Canadian Society of Women in Philosophy, Nancy Tuana argued that the clitoris is unnecessary in reproduction; she stated that it has been ignored because of "a fear of pleasure. It is pleasure separated from reproduction. That's the fear". She reasoned that this fear causes ignorance, which veils female sexuality. O'Connell stated, "It boils down to rivalry between the sexes: the idea that one sex is sexual and the other reproductive. The truth is that both are sexual and both are reproductive". She reiterated that the vestibular bulbs appear to be part of the clitoris and that the distal urethra and vagina are intimately related structures, although they are not erectile in character, forming a tissue cluster with the clitoris that appears to be the location of female sexual function and orgasm. Clinical significance. Modification. Genital modification may be for aesthetic, medical or cultural reasons. This includes female genital mutilation (FGM), sex reassignment surgery (for trans men as part of transitioning), intersex surgery, and genital piercings. Use of anabolic steroids by bodybuilders and other athletes can result in significant enlargement of the clitoris along with other masculinizing effects on their bodies. Abnormal enlargement of the clitoris may be referred to as "clitoromegaly" or "macroclitoris", but clitoromegaly is more commonly seen as a congenital anomaly of the genitalia. Clitoroplasty, a sex reassignment surgery for trans women, involves the construction of a clitoris from penile tissue. People taking hormones or other medications as part of a gender transition usually experience dramatic clitoral growth; individual desires and the difficulties of phalloplasty (construction of a penis) often result in the retention of the original genitalia with the enlarged clitoris as a penis analog (metoidioplasty). However, the clitoris cannot reach the size of the penis through hormones. A surgery to add function to the clitoris, such as metoidioplasty, is an alternative to phalloplasty that permits the retention of sexual sensation in the clitoris. In clitoridectomy, the clitoris may be removed as part of a radical vulvectomy to treat cancer such as vulvar intraepithelial neoplasia; however, modern treatments favor more conservative approaches, as invasive surgery can have psychosexual consequences. Clitoridectomy more often involves parts of the clitoris being partially or completely removed during FGM, which may also be known as female circumcision or female genital cutting (FGC). Removing the glans does not mean that the whole structure is lost, since the clitoris reaches deep into the genitals. In reduction clitoroplasty, a common intersex surgery, the glans is preserved and parts of the erectile bodies are excised. 
Problems with this technique include loss of sensation, loss of sexual function, and sloughing of the glans. One way to preserve the clitoris with its innervations and function is to imbricate and bury the glans; however, Şenaylı et al. state that "pain during stimulus because of trapped tissue under the scarring is nearly routine. In another method, 50 percent of the ventral clitoris is removed through the level base of the clitoral shaft, and it is reported that good sensation and clitoral function are observed in follow-up"; additionally, it has "been reported that the complications are from the same as those in the older procedures for this method". Concerning females who have the condition congenital adrenal hyperplasia, the largest group requiring surgical genital correction, researcher Atilla Şenaylı stated, "The main expectations for the operations are to create a normal female anatomy, with minimal complications and improvement of life quality". Şenaylı added that "[c]osmesis, structural integrity, the coital capacity of the vagina, and absence of pain during sexual activity are the parameters to be judged by the surgeon". (Cosmesis usually refers to the surgical correction of a disfiguring defect.) He stated that although "expectations can be standardized within these few parameters, operative techniques have not yet become homogeneous. Investigators have preferred different operations for different ages of patients". Gender assessment and surgical treatment are the two main steps in intersex operations. "The first treatments for clitoromegaly were simply resection of the clitoris. Later, it was understood that the clitoris glans and sensory input are important to facilitate orgasm", stated Atilla. The clitoral glans' epithelium "has high cutaneous sensitivity, which is important in sexual responses", and it is because of this that "recession clitoroplasty was later devised as an alternative, but reduction clitoroplasty is the method currently performed". What is often referred to as a "clitoris piercing" is the more common (and significantly less complicated) clitoral hood piercing. Since piercing the clitoris is difficult and very painful, piercing the clitoral hood is more common than piercing the clitoral shaft or glans, owing to the small percentage of people who are anatomically suited for it. Clitoral hood piercings are usually channeled in the form of vertical piercings, and, to a lesser extent, horizontal piercings. The triangle piercing is a very deep horizontal hood piercing and is done behind the clitoris as opposed to in front of it. For styles such as the Isabella piercing, which passes through the clitoral shaft but is placed deep at the base, they provide unique stimulation and still require the proper genital build. The Isabella starts between the clitoral glans and the urethra, exiting at the top of the clitoral hood; this piercing is highly risky concerning the damage that may occur because of intersecting nerves. (See Clitoral index.) Sexual disorders. Persistent genital arousal disorder (PGAD) results in spontaneous, persistent, and uncontrollable genital arousal in women, unrelated to any feelings of sexual desire. Clitoral priapism is a rare, potentially painful medical condition and is sometimes described as an aspect of PGAD. With PGAD, arousal lasts for an unusually extended period (ranging from hours to days); it can also be associated with morphometric and vascular modifications of the clitoris. Drugs may cause or affect clitoral priapism. 
The drug trazodone is known to cause male priapism as a side effect, but there is only one documented report that it may have caused clitoral priapism, in which case discontinuing the medication may be a remedy. Additionally, nefazodone is documented to have caused clitoral engorgement, as distinct from clitoral priapism, in one case, and clitoral priapism can sometimes start as a result of, or only after, the discontinuation of antipsychotics or selective serotonin reuptake inhibitors (SSRIs). Because PGAD is relatively rare and, as a concept distinct from clitoral priapism, has only been researched since 2001, there is little research into what may cure or remedy the disorder. In some recorded cases, PGAD was caused by, or itself caused, a pelvic arterial-venous malformation with arterial branches to the clitoris; surgical treatment was effective in these cases. In 2022, an article in "The New York Times" reported several instances of women experiencing reduced clitoral sensitivity or inability to orgasm following various surgical procedures, including biopsies of the vulva, pelvic mesh surgeries (sling surgeries), and labiaplasties. The Times quoted several researchers who suggest that surgeons' lack of training in clitoral anatomy and nerve distribution may have been a factor. As it is part of the vulva, the clitoris is susceptible to pain (clitorodynia) from various conditions such as sexually transmitted infections and pudendal nerve entrapment. The clitoris may also be affected by vulvar cancer, although at a much lower rate. Clitoral phimosis (or clitoral adhesions) occurs when the prepuce cannot be retracted, limiting exposure of the glans. Smegma. The secretion of smegma (smegma clitoridis) comes from the apocrine glands of the clitoris (sweat), the sebaceous glands of the clitoris (sebum) and desquamating epithelial cells. Society and culture. Ancient Greek–16th century knowledge and vernacular. Concerning historical and modern perceptions of the clitoris, the clitoris and the penis were considered equivalent by some scholars for more than 2,500 years in all respects except their arrangement. Because it was frequently omitted from, or misrepresented in, historical and contemporary anatomical texts, it was also subject to a continual cycle of male scholars claiming to have discovered it. The ancient Greeks, ancient Romans, and Greek and Roman generations up to and throughout the Renaissance, were aware that male and female sex organs are anatomically similar, but prominent anatomists such as Galen and Vesalius regarded the vagina as the structural equivalent of the penis, except for being inverted; Vesalius argued against the existence of the clitoris in normal women, and his anatomical model described how the penis corresponds with the vagina, without a role for the clitoris. Ancient Greek and Roman sexuality additionally designated penetration as "male-defined" sexuality. The term "tribas", or , was used to refer to a woman or intersex individual who actively penetrated another person (male or female) through the use of the clitoris or a dildo. As any sexual act was believed to require that one of the partners be "phallic", and as sexual activity between women was therefore thought to be impossible without this feature, mythology popularly associated lesbians with either having enlarged clitorises or being incapable of enjoying sexual activity without the substitution of a phallus.
In 1545, Charles Estienne was the first writer to identify the clitoris in a work based on dissection, but he concluded that it had a urinary function. Following this study, Realdo Colombo (also known as Renaldus Columbus), a lecturer in surgery at the University of Padua, Italy, published a book called "De re anatomica" in 1559, in which he describes the "seat of woman's delight". In his role as researcher, Colombo concluded, "Since no one has discerned these projections and their workings, if it is permissible to give names to things discovered by me, it should be called the love or sweetness of Venus", in reference to the mythological Venus, goddess of erotic love. Colombo's claim was disputed by his successor at Padua, Gabriele Falloppio (discoverer of the fallopian tube), who claimed that he was the first to discover the clitoris. In 1561, Falloppio stated, "Modern anatomists have entirely neglected it ... and do not say a word about it ... and if others have spoken of it, know that they have taken it from me or my students". This caused an upset in the European medical community, and, having read Colombo's and Falloppio's detailed descriptions of the clitoris, Vesalius stated, "It is unreasonable to blame others for incompetence on the basis of some sport of nature you have observed in some women and you can hardly ascribe this new and useless part, as if it were an organ, to healthy women". He concluded, "I think that such a structure appears in hermaphrodites who otherwise have well-formed genitals, as Paul of Aegina describes, but I have never once seen in any woman a penis (which Avicenna called albaratha and the Greeks called an enlarged nympha and classed as an illness) or even the rudiments of a tiny phallus". The average anatomist had difficulty challenging Galen's or Vesalius' research; Galen was the most famous physician of the Greek era and his works were considered the standard of medical understanding up to and throughout the Renaissance (i.e. for almost two thousand years), and the various terms used to describe the clitoris seemed to have further confused the issue of its structure. In addition to Avicenna's naming it the "albaratha" or "virga" ("rod") and Colombo's calling it the sweetness of Venus, Hippocrates used the term "columella" ("little pillar"), and Albucasis, an Arabic medical authority, named it "tentigo" ("tension"). These names indicated that each description of the structures referred to the body and glans of the clitoris, but usually to the glans. It was additionally known to the Romans, who named it (vulgar slang) "landica". However, Albertus Magnus, one of the most prolific writers of the Middle Ages, felt that it was important to highlight "homologies between male and female structures and function" by adding "a psychology of sexual arousal" that Aristotle had not used to detail the clitoris. While in Constantine's treatise "Liber de Coitu", the clitoris is referred to a few times, Magnus gave an equal amount of attention to male and female organs. Like Avicenna, Magnus also used the word "virga" for the clitoris, but employed it for both the male and female genitals; despite his efforts to give equal ground to the clitoris, the cycle of suppression and rediscovery of the organ continued, and a 16th-century justification for clitoridectomy appears to have been confused with intersex conditions and the imprecision created by the word "nymphae" substituted for the word "clitoris".
Nymphotomy was a medical operation to excise an unusually large clitoris, but what was considered "unusually large" was often a matter of perception. The procedure was routinely performed on Egyptian women, due to physicians such as Jacques Daléchamps, who believed that this version of the clitoris was "an unusual feature that occurred in almost all Egyptian women [and] some of ours, so that when they find themselves in the company of other women, or their clothes rub them while they walk or their husbands wish to approach them, it erects like a male penis and indeed they use it to play with other women, as their husbands would do ... Thus the parts are cut". 17th century–present day knowledge and vernacular. Caspar Bartholin (after whom Bartholin's glands are named), a 17th-century Danish anatomist, dismissed Colombo's and Falloppio's claims that they discovered the clitoris, arguing that the clitoris had been widely known to medical science since the second century. Although 17th-century midwives recommended to men and women that women should aspire to achieve orgasms to help them get pregnant, for general health and well-being, and to keep their relationships healthy, debate about the importance of the clitoris persisted, notably in the work of Regnier de Graaf in the 17th century and Georg Ludwig Kobelt in the 19th. Like Falloppio and Bartholin, de Graaf criticized Colombo's claim of having discovered the clitoris; his work appears to have provided the first comprehensive account of clitoral anatomy. "We are extremely surprised that some anatomists make no more mention of this part than if it did not exist at all in the universe of nature", he stated. "In every cadaver we have so far dissected, we have found it quite perceptible to sight and touch". De Graaf stressed the need to distinguish from , choosing to "always give [the clitoris] the name clitoris" to avoid confusion; this resulted in the frequent use of the correct name for the organ among anatomists, but considering that was also varied in its use and eventually became the term specific to the labia minora, more confusion ensued. Debate about whether orgasm was even necessary for women began in the Victorian era, and Freud's 1905 theory about the immaturity of clitoral orgasms (see above) negatively affected women's sexuality throughout most of the 20th century. Toward the end of World War I, a maverick British MP named Noel Pemberton Billing published an article entitled "The Cult of the Clitoris", furthering his conspiracy theories and attacking the actress Maud Allan and Margot Asquith, wife of the prime minister. The accusations led to a sensational libel trial, which Billing eventually won; Philip Hoare reports that Billing argued that "as a medical term, 'clitoris' would only be known to the 'initiated', and was incapable of corrupting moral minds". Jodie Medd argues regarding "The Cult of the Clitoris" that "the female non-reproductive but desiring body [...] simultaneously demands and refuses interpretative attention, inciting scandal through its very resistance to representation". From the 18th to the 20th century, especially during the 20th, details of the clitoris from various genital diagrams presented in earlier centuries were omitted from later texts.
The full extent of the clitoris was alluded to by Masters and Johnson in 1966, but in such a muddled fashion that the significance of their description became obscured; in 1981, the Federation of Feminist Women's Health Clinics (FFWHC) continued this process with anatomically precise illustrations identifying 18 structures of the clitoris. Despite the FFWHC's illustrations, Josephine Lowndes Sevely, in 1987, described the vagina as more of the counterpart of the penis. Concerning other beliefs about the clitoris, Hite (1976 and 1981) found that, during sexual intimacy with a partner, clitoral stimulation was more often described by women as foreplay than as a primary method of sexual activity, including orgasm. Further, although the FFWHC's work significantly propelled feminist reformation of anatomical texts, it did not have a general impact. Helen O'Connell's late 1990s research motivated the medical community to start changing the way the clitoris is anatomically defined. O'Connell describes typical textbook descriptions of the clitoris as lacking detail and including inaccuracies, such as older and modern anatomical descriptions of the female human urethral and genital anatomy having been based on dissections performed on elderly cadavers whose erectile (clitoral) tissue had shrunk. She instead credits the work of Georg Ludwig Kobelt as the most comprehensive and accurate description of clitoral anatomy. MRI measurements, which provide a live and multi-planar method of examination, now complement the FFWHC's, as well as O'Connell's, research efforts concerning the clitoris, showing that the volume of clitoral erectile tissue is ten times that which is shown in doctors' offices and anatomy textbooks. In Bruce Bagemihl's survey of "The Zoological Record" (1978–1997), which contains over a million documents from over 6,000 scientific journals, 539 articles focusing on the penis were found, while seven were found focusing on the clitoris. In 2000, researchers Shirley Ogletree and Harvey Ginsberg concluded that there is a general neglect of the word in the common vernacular. They looked at the terms used to describe genitalia in the PsycINFO database from 1887 to 2000 and found that was used in 1,482 sources, in 409, while was only mentioned in 83. They additionally analyzed 57 books listed in a computer database for sex instruction. In the majority of the books, was the most commonly discussed body part, mentioned more than , , and put together. They last investigated terminology used by college students, ranging from Euro-American (76%/76%) and Hispanic (18%/14%) to African American (4%/7%), regarding the students' beliefs about sexuality and knowledge on the subject. The students were overwhelmingly educated to believe that the vagina is the female counterpart of the penis. The authors found that the students' belief that the inner portion of the vagina is the most sexually sensitive part of the female body correlated with negative attitudes toward masturbation and strong support for sexual myths. A study in 2005 reported that, among a sample of undergraduate students, the most frequently cited sources for knowledge about the clitoris were school and friends, and that this was associated with the least tested knowledge. Knowledge of the clitoris by self-exploration was the least cited, but "respondents correctly answered, on average, three of the five clitoral knowledge measures".
The authors stated that "[k]nowledge correlated significantly with the frequency of women's orgasm in masturbation but not partnered sex" and that their "results are discussed in light of gender inequality and a social construction of sexuality, endorsed by both men and women, that privileges men's sexual pleasure over women's, such that orgasm for women is pleasing but ultimately incidental". They concluded that part of the solution to remedying "this problem" requires that males and females are taught more about the clitoris than is currently practiced. The humanitarian group Clitoraid launched the first annual International Clitoris Awareness Week, from 6 to 12 May in 2015. Clitoraid spokesperson Nadine Gary stated that the group's mission is to raise public awareness about the clitoris because it has "been ignored, vilified, made taboo, and considered sinful and shameful for centuries". (See also Vulva activism.) Odile Fillod created a 3D printable, open source, full-size model of the clitoris, for use in a set of anti-sexist videos she had been commissioned to produce. Fillod was interviewed by Stephanie Theobald, whose article in "The Guardian" stated that the 3D model would be used for sex education in French schools, from primary to secondary level, from September 2016 onwards; this was not the case, but the story went viral across the world. A questionnaire in a 2019 study was administered to a sample of educational sciences postgraduate students to trace the level of their knowledge concerning the organs of the female and male reproductive system. The authors reported that about two-thirds of the students failed to name parts of the vulva, such as the clitoris and labia, even after detailed pictures were provided to them. An analysis in 2022 reported that the clitoris is mentioned in only one out of 113 Greek secondary education textbooks used in biology classes from the 1870s to the present. Contemporary art. New York artist Sophia Wallace started work in 2012 on a multimedia project to challenge misconceptions about the clitoris. Based on O'Connell's 1998 research, Wallace's work emphasizes the sheer scope and size of the human clitoris. She says that ignorance of this still seems to be pervasive in modern society. "It is a curious dilemma to observe the paradox that on the one hand, the female body is the primary metaphor for sexuality, its use saturates advertising, art, and the mainstream erotic imaginary", she said. "Yet, the clitoris, the true female sexual organ, is virtually invisible". The project is called and it includes a "clit rodeo", which is an interactive, climb-on model of a giant golden clitoris, including its inner parts, produced with the help of sculptor Kenneth Thomas. "It's been a showstopper wherever it's been shown. People are hungry to be able to talk about this", Wallace said. "I love seeing men standing up for the clit [...] Cliteracy is about not having one's body controlled or legislated [...] Not having access to the pleasure that is your birthright is a deeply political act". Another project, Clitorosity, started in New York in 2016 as street art and has since spread to almost 100 cities; it is a "community-driven effort to celebrate the full structure of the clitoris", combining chalk drawings and words to spark interaction and conversation with passers-by, which the team documents on social media. In 2016, Lori Malépart-Traversy made an animated documentary about the unrecognized anatomy of the clitoris.
Alli Sebastian Wolf created a golden scale model of the clitoris in 2017, called the "Glitoris", and said she hopes knowledge of the clitoris will soon become so uncontroversial that making art about it would be as irrelevant as making art about penises. Other projects listed by the BBC include Clito Clito, body-positive jewellery made in Berlin; "Clitorissima", a documentary intended to normalize mother-daughter conversations about the clitoris; and a ClitArt festival in London, encompassing spoken word performances as well as visual art. French art collective Les Infemmes (a blend word of "infamous" and "women") published a fanzine whose title can be translated as "The Clit Cheatsheet". Influence on female genital mutilation. Significant controversy surrounds female genital mutilation (FGM), with the World Health Organization (WHO) being one of many health organizations that have campaigned against the procedures on behalf of human rights, stating that "FGM has no health benefits" and that it is "a violation of the human rights of girls and women" which "reflects deep-rooted inequality between the sexes". The practice has existed at one point or another in almost all human civilizations, most commonly to exert control over the sexual behavior, including masturbation, of girls and women, but also to change the clitoris' appearance. Custom and tradition are the most frequently cited reasons for FGM, with some cultures believing that not performing it has the possibility of disrupting the cohesiveness of their social and political systems, such as FGM also being a part of a girl's initiation into adulthood. Often, a girl is not considered an adult in an FGM-practicing society unless she has undergone FGM, and the "removal of the clitoris and labia, viewed by some as the of a woman's body, is thought to enhance the girl's femininity, often synonymous with docility and obedience". Female genital mutilation is carried out in several societies, especially in Africa, with 85 percent of genital mutilations performed in Africa consisting of clitoridectomy or excision, and to a lesser extent in other parts of the Middle East and Southeast Asia, on girls from a few days old to mid-adolescence, often to reduce sexual desire in an effort to preserve vaginal virginity. The practice of FGM has spread globally, as immigrants from Asia, Africa, and the Middle East bring the custom with them. In the United States, it is sometimes practiced on girls born with a clitoris that is larger than usual. Comfort Momoh, who specializes in the topic of FGM, states that FGM might have been "practiced in ancient Egypt as a sign of distinction among the aristocracy"; there are reports that traces of infibulation are on Egyptian mummies. FGM is still routinely practiced in Egypt. Greenberg et al. report that "one study found that 97 percent of married women in Egypt had had some form of genital mutilation performed". Amnesty International estimated in 1997 that more than two million FGM procedures are performed every year. Other animals. Although the clitoris (and clitoral prepuce/sheath) exists in all mammal species, there are few detailed studies of the anatomy of the clitoris in non-humans. Studies have been done on the clitoris of cats, sheep and mice. Some mammals have clitoral glands. The clitoris is especially developed in fossas, non-human apes, lemurs, and moles, and often contains a small bone known as the os clitoridis.
Many species of talpid moles exhibit peniform clitorises that are tunneled by the urethra and are found to have erectile tissue. In horses and dogs, the clitoris is contained in a fossa, which is a small pouch of tissue. The clitoris is found in other amniotic creatures, including reptiles such as turtles and crocodilians, and birds such as ratites (e.g., cassowaries, ostriches) and anatids (e.g., swans, ducks). The hemiclitoris is one-half of a paired structure in squamates (lizards and snakes). Some intersex female bears mate and give birth through the tip of the clitoris; these species are grizzly bears, brown bears, American black bears and polar bears. Although the bears have been described as having "a birth canal that runs through the clitoris rather than forming a separate vagina" (a feature that is estimated to occur in 10 to 20 percent of the bears' population), scientists state that female spotted hyenas are the only non-intersex female mammals devoid of an external vaginal opening, and whose sexual anatomy is distinct from usual intersex cases. Non-human primates. In spider monkeys, the clitoris is especially developed and has an interior passage, or urethra, that makes it almost identical to the penis, and it retains and distributes urine droplets as the female spider monkey moves around. Scholar Alan F. Dixson stated that this urine "is voided at the bases of the clitoris, flows down the shallow groove on its perineal surface, and is held by the skin folds on each side of the groove". Because spider monkeys of South America have pendulous and erectile clitorises long enough to be mistaken for a penis, researchers and observers of the species look for a scrotum to determine the animal's sex; a similar approach is to identify scent-marking glands that may also be present on the clitoris. The clitoris becomes erect in squirrel monkeys during dominance displays, which indirectly influences the squirrel monkeys' reproductive success. The clitoris of bonobos is larger and more externalized than in most mammals; Natalie Angier said that a young adolescent "female bonobo is maybe half the weight of a human teenager, but her clitoris is three times bigger than the human equivalent, and visible enough to waggle unmistakably as she walks". Female bonobos often engage in the practice of genital-genital (GG) rubbing. Ethologist Jonathan Balcombe stated that female bonobos rub their clitorises together rapidly for ten to twenty seconds, and this behavior, "which may be repeated in rapid succession, is usually accompanied by grinding, shrieking, and clitoral engorgement"; he added that, on average, they engage in this practice "about once every two hours", and as bonobos sometimes mate face-to-face, "evolutionary biologist Marlene Zuk has suggested that the position of the clitoris in bonobos and some other primates has evolved to maximize stimulation during sexual intercourse". Many strepsirrhine species exhibit elongated clitorises that are either fully or partially tunneled by the urethra, including mouse lemurs, dwarf lemurs, all "Eulemur" species, lorises and galagos. Some of these species also exhibit a membrane seal across the vagina that closes the vaginal opening during the non-mating seasons, most notably mouse and dwarf lemurs. The clitoral morphology of the ring-tailed lemur is the most well-studied. They are described as having "elongated, pendulous clitorises that are [fully] tunneled by a urethra".
The urethra is surrounded by erectile tissue, which allows for significant swelling during breeding seasons, but this erectile tissue differs from the typical male corpus spongiosum. Non-pregnant adult ring-tailed females do not show higher testosterone levels than males, but they do exhibit higher A4 and estrogen levels during seasonal aggression. During pregnancy, estrogen, A4, and testosterone levels are raised, but female fetuses are still "protected" from excess testosterone. These "masculinized" genitalia are often found alongside other traits, such as female-dominated social groups, reduced sexual dimorphism that makes females the same size as males, and even ratios of sexes in adult populations. This phenomenon has been dubbed the "lemur syndrome". A 2014 study of "Eulemur" masculinization proposed that behavioral and morphological masculinization in female Lemuriformes is an ancestral trait that likely emerged after their split from Lorisiformes. Spotted hyenas. While female spotted hyenas were sometimes referred to as pseudohermaphrodites and scientists of ancient and later historical times believed that they were hermaphrodites, modern scientists do not refer to them as such. That designation is typically reserved for those who simultaneously exhibit features of both sexes; the genetic makeup of female spotted hyenas "are clearly distinct" from male spotted hyenas. Female spotted hyenas have a clitoris 90 percent as long and the same diameter as a male penis (171 millimetres long and 22 millimetres in diameter), and this pseudo-penis' formation seems largely androgen-independent because it appears in the female fetus before differentiation of the fetal ovary and adrenal gland. The spotted hyenas have a highly erectile clitoris, complete with a false scrotum; author John C. Wingfield stated that "the resemblance to male genitalia is so close that sex can be determined with confidence only by palpation of the scrotum". The pseudo-penis can also be distinguished from the males' genitalia by its greater thickness and more rounded glans. The female possesses no external vagina, as the labia are fused to form a pseudo-scrotum. In the females, this scrotum consists of soft adipose tissue. Like male spotted hyenas with regard to their penises, the female spotted hyenas have small spines on the head of their clitorises, which scholar said makes "the clitoris tip feel like soft sandpaper". She added that the clitoris "extends away from the body in a sleek and slender arc, measuring, on average, over 17 cm from root to tip. Just like a penis, [it] is fully erectile, raising its head in hyena greeting ceremonies, social displays, games of rough and tumble or when sniffing out peers". Due to their higher levels of androgen exposure during fetal development, the female hyenas are significantly more muscular and aggressive than their male counterparts; socially, they are of higher rank than the males, being dominant or dominant and alpha, and the females who have been exposed to higher levels of androgen than average become higher-ranking than their female peers. Subordinate females lick the clitorises of higher-ranked females as a sign of submission and obedience, but females also lick each other's clitorises as a greeting or to strengthen social bonds; in contrast, while all males lick the clitorises of dominant females, the females will not lick the penises of males because males are considered to be of lowest rank.
The female spotted hyenas urinate, copulate and give birth through the clitoris since the urethra and vagina exit through the clitoral glans. This trait makes mating more laborious for the male than in other mammals, and also makes attempts to sexually coerce (physically force sexual activity on) females futile. Joan Roughgarden, an ecologist and evolutionary biologist, said that because the hyena's clitoris is higher on the belly than the vagina in most mammals, the male hyena "must slide his rear under the female when mating so that his penis lines up with [her clitoris]". In an action similar to pushing up a shirtsleeve, the "female retracts the [pseudo-penis] on itself, and creates an opening into which the male inserts his own penis". The male must practice this act, which can take a couple of months to successfully perform. Female spotted hyenas exposed to larger doses of androgen have significantly damaged ovaries, making it difficult to conceive. After giving birth, the pseudo-penis is stretched and loses much of its original aspects; it becomes a slack-walled and reduced prepuce with an enlarged orifice with split lips. Approximately 15% of the females die during their first time giving birth, and over 60% of their species' firstborn young die. A 2006 Baskin et al. study concluded, "The basic anatomical structures of the corporeal bodies in both sexes of humans and spotted hyenas were similar. As in humans, the dorsal nerve distribution was unique in being devoid of nerves at the 12 o'clock position in the penis and clitoris of the spotted hyena" and that "[d]orsal nerves of the penis/clitoris in humans and male spotted hyenas tracked along both sides of the corporeal body to the corpus spongiosum at the 5 and 7 o'clock positions. The dorsal nerves penetrated the corporeal body and distally the glans in the hyena", and in female hyenas, "the dorsal nerves fanned out laterally on the clitoral body. Glans morphology was different in appearance in both sexes, being wide and blunt in the female and tapered in the male".
6886
34655058
https://en.wikipedia.org/wiki?curid=6886
Chicago
Chicago is the most populous city in the U.S. state of Illinois and in the Midwestern United States. Located on the western shore of Lake Michigan, it is the third-most populous city in the United States with a population of 2.74 million at the 2020 census, while the Chicago metropolitan area has 9.41 million residents and is the third-largest metropolitan area in the nation. Chicago is the seat of Cook County, the second-most populous county in the United States. Chicago was incorporated as a city in 1837 near a portage between the Great Lakes and the Mississippi River watershed. It grew rapidly in the mid-19th century. In 1871, the Great Chicago Fire destroyed several square miles and left more than 100,000 homeless, but Chicago's population continued to grow. Chicago made noted contributions to urban planning and architecture, such as the Chicago School, the development of the City Beautiful movement, and the steel-framed skyscraper. Chicago is an international hub for finance, culture, commerce, industry, education, technology, telecommunications, and transportation. It has the largest and most diverse finance derivatives market in the world, generating 20% of all volume in commodities and financial futures alone. O'Hare International Airport is routinely ranked among the world's top ten busiest airports by passenger traffic, and the region is also the nation's railroad hub. The Chicago area has one of the highest gross domestic products (GDP) of any urban region in the world, generating $689 billion in 2018. Chicago's economy is diverse, with no single industry employing more than 14% of the workforce. Chicago is a major destination for tourism, with 55 million visitors in 2024 to its cultural institutions, Lake Michigan beaches, restaurants, and more. Chicago's culture has contributed much to the visual arts, literature, film, theater, comedy (especially improvisational comedy), food, dance, and music (particularly jazz, blues, soul, hip-hop, gospel, and electronic dance music, including house music). Chicago is home to the Chicago Symphony Orchestra and the Lyric Opera of Chicago, while the Art Institute of Chicago provides an influential visual arts museum and art school. The Chicago area also hosts the University of Chicago, Northwestern University, and the University of Illinois Chicago, among other institutions of learning. Professional sports in Chicago include all major professional leagues, including two Major League Baseball teams. The city also hosts the Chicago Marathon, one of the World Marathon Majors. Etymology and nicknames. The name "Chicago" is derived from a French rendering of the indigenous Miami–Illinois name , the locative form of the word which can mean both "skunk" and "ramps," a wild relative of onion and garlic known to botanists as "Allium tricoccum". The first known reference to the site of the city of Chicago as "" was by Robert de LaSalle around 1679 in a memoir. Henri Joutel, in his journal of 1688, noted that the eponymous wild "garlic" grew profusely in the area. According to his diary of late September 1687: The city has had several nicknames throughout its history, such as the Windy City, Chi-Town, Second City, and City of the Big Shoulders. History. Beginnings. In the mid-18th century, the area was inhabited by the Potawatomi, an indigenous tribe who had succeeded the Miami, Sauk and Meskwaki peoples in this region. The first known permanent settler in Chicago was a trader, Jean Baptiste Point du Sable. 
Du Sable was of African descent, perhaps born in the French colony of Saint-Domingue (Haiti), and he established the settlement in the 1780s. He is commonly known as the "Founder of Chicago." In 1795, following the victory of the new United States in the Northwest Indian War, an area that was to be part of Chicago was turned over to the U.S. for a military post by native tribes in accordance with the Treaty of Greenville. In 1803, the U.S. Army constructed Fort Dearborn, which was destroyed during the War of 1812 in the Battle of Fort Dearborn by the Potawatomi and later rebuilt. After the War of 1812, the Ottawa, Ojibwe, and Potawatomi tribes ceded additional land to the United States in the 1816 Treaty of St. Louis. The Potawatomi were forcibly removed from their land after the 1833 Treaty of Chicago and sent west of the Mississippi River as part of the federal policy of Indian removal. 19th century. On August 12, 1833, the Town of Chicago was organized with a population of about 200. Within seven years it grew to more than 6,000 people. On June 15, 1835, the first public land sales began with Edmund Dick Taylor as Receiver of Public Monies. The City of Chicago was incorporated on Saturday, March 4, 1837, and for several decades was the world's fastest-growing city. As the site of the Chicago Portage, the city became an important transportation hub between the eastern and western United States. Chicago's first railway, Galena and Chicago Union Railroad, and the Illinois and Michigan Canal opened in 1848. The canal allowed steamboats and sailing ships on the Great Lakes to connect to the Mississippi River. A flourishing economy brought residents from rural communities and immigrants from abroad. The manufacturing, retail, and finance sectors became dominant, influencing the American economy. The Chicago Board of Trade (established 1848) listed the first-ever standardized "exchange-traded" forward contracts, which were called futures contracts. In the 1850s, Chicago gained national political prominence as the home of Senator Stephen Douglas, the champion of the Kansas–Nebraska Act and the "popular sovereignty" approach to the issue of the spread of slavery. These issues also helped propel another Illinoisan, Abraham Lincoln, to the national stage. Lincoln was nominated in Chicago for U.S. president at the 1860 Republican National Convention, which was held in a purpose-built auditorium called the Wigwam. He defeated Douglas in the general election, and this set the stage for the American Civil War. To accommodate rapid population growth and demand for better sanitation, the city improved its infrastructure. In February 1856, Chicago's Common Council approved Chesbrough's plan to build the United States' first comprehensive sewerage system. The project raised much of central Chicago to a new grade with the use of jackscrews for raising buildings. While the raising of Chicago at first improved the city's health, the untreated sewage and industrial waste now flowed into the Chicago River, and subsequently into Lake Michigan, polluting the city's primary freshwater source. The city responded by tunneling out into Lake Michigan to newly built water cribs. In 1900, the problem of sewage contamination was largely resolved when the city completed a major engineering feat. It reversed the flow of the Chicago River so that the water flowed away from Lake Michigan rather than into it.
This project began with the construction and improvement of the Illinois and Michigan Canal, and was completed with the Chicago Sanitary and Ship Canal that connects to the Illinois River, which flows into the Mississippi River. In 1871, the Great Chicago Fire destroyed an area about long and wide, a large section of the city at the time. Much of the city, including railroads and stockyards, survived intact, and from the ruins of the previous wooden structures arose more modern constructions of steel and stone. These set a precedent for worldwide construction. During its rebuilding period, Chicago constructed the world's first skyscraper in 1885, using steel-skeleton construction. The city grew significantly in size and population by incorporating many neighboring townships between 1851 and 1920, with the largest annexation happening in 1889, with five townships joining the city, including the Hyde Park Township, which now comprises most of the South Side of Chicago and the far southeast of Chicago, and the Jefferson Township, which now makes up most of Chicago's Northwest Side. The desire to join the city was driven by municipal services that the city could provide its residents. Chicago's flourishing economy attracted huge numbers of new immigrants from Europe and migrants from the Eastern United States. Of the total population in 1900, more than 77% were either foreign-born or born in the United States of foreign parentage. Germans, Irish, Poles, Swedes, and Czechs made up nearly two-thirds of the foreign-born population (by 1900, whites were 98.1% of the city's population). Labor conflicts followed the industrial boom and the rapid expansion of the labor pool, including the Haymarket affair on May 4, 1886, and in 1894 the Pullman Strike. Anarchist and socialist groups played prominent roles in creating very large and highly organized labor actions. Concern for social problems among Chicago's immigrant poor led Jane Addams and Ellen Gates Starr to found Hull House in 1889. Programs that were developed there became a model for the new field of social work. During the 1870s and 1880s, Chicago attained national stature as the leader in the movement to improve public health. City laws and later, state laws that upgraded standards for the medical profession and fought urban epidemics of cholera, smallpox, and yellow fever were both passed and enforced. These laws became templates for public health reform in other cities and states. The city established many large, well-landscaped municipal parks, which also included public sanitation facilities. The chief advocate for improving public health in Chicago was John H. Rauch, M.D. Rauch established a plan for Chicago's park system in 1866. He created Lincoln Park by closing a cemetery filled with shallow graves, and in 1867, in response to an outbreak of cholera he helped establish a new Chicago Board of Health. Ten years later, he became the secretary and then the president of the first Illinois State Board of Health, which carried out most of its activities in Chicago. In the 1800s, Chicago became the nation's railroad hub, and by 1910 over 20 railroads operated passenger service out of six different downtown terminals. In 1883, Chicago's railway managers needed a general time convention, so they developed the standardized system of North American time zones. This system for telling time spread throughout the continent. In 1893, Chicago hosted the World's Columbian Exposition on former marshland at the present location of Jackson Park. 
The Exposition drew 27.5 million visitors, and is considered the most influential world's fair in history. The University of Chicago, formerly at another location, moved to the same South Side location in 1892. The term "midway" for a fair or carnival referred originally to the Midway Plaisance, a strip of park land that still runs through the University of Chicago campus and connects the Washington and Jackson Parks. 20th and 21st centuries. 1900 to 1939. During World War I and the 1920s there was a major expansion in industry. The availability of jobs attracted African Americans from the Southern United States. Between 1910 and 1930, the African American population of Chicago increased dramatically, from 44,103 to 233,903. This Great Migration had an immense cultural impact, called the Chicago Black Renaissance, part of the New Negro Movement, in art, literature, and music. Continuing racial tensions and violence, such as the Chicago race riot of 1919, also occurred. The ratification of the 18th Amendment to the Constitution in 1919 made the production and sale (including exportation) of alcoholic beverages illegal in the United States. This ushered in the beginning of what is known as the gangster era, a time that roughly spans from 1919 until 1933, when Prohibition was repealed. The 1920s saw gangsters, including Al Capone, Dion O'Banion, Bugs Moran, and Tony Accardo, battle law enforcement and each other on the streets of Chicago during the Prohibition era. Chicago was the location of the infamous St. Valentine's Day Massacre in 1929, when Al Capone sent men to gun down members of a rival gang, the North Side Gang, led by Bugs Moran. From 1920 to 1921, the city was affected by a series of tenant rent strikes, which led to the formation of the Chicago Tenants Protective Association, the passage of the Kessenger tenant laws, and a heat ordinance that legally required landlords to keep flats above 68 °F during winter months. Chicago was the first American city to have a homosexual-rights organization. The organization, formed in 1924, was called the Society for Human Rights. It produced the first American publication for homosexuals, "Friendship and Freedom". Police and political pressure caused the organization to disband. The Great Depression brought unprecedented suffering to Chicago, in no small part due to the city's reliance on heavy industry. Notably, industrial areas on the south side and neighborhoods lining both branches of the Chicago River were devastated; by 1933 over 50% of industrial jobs in the city had been lost, and unemployment rates amongst blacks and Mexicans in the city were over 40%. The Republican political machine in Chicago was utterly destroyed by the economic crisis, and every mayor since 1931 has been a Democrat. From 1928 to 1933, the city witnessed a tax revolt, and the city was unable to meet payroll or provide relief efforts. The fiscal crisis was resolved by 1933, and at the same time, federal relief funding began to flow into Chicago. Chicago was also a hotbed of labor activism, with Unemployed Councils contributing heavily in the early depression to create solidarity for the poor and demand relief; these organizations were created by socialist and communist groups. By 1935 the Workers Alliance of America began organizing the poor, workers, and the unemployed. In the spring of 1937, the Republic Steel Works witnessed the Memorial Day massacre in the neighborhood of East Side.
In 1933, Chicago Mayor Anton Cermak was fatally wounded in Miami, Florida, during a failed assassination attempt on President-elect Franklin D. Roosevelt. In 1933 and 1934, the city celebrated its centennial by hosting the Century of Progress International Exposition World's Fair. The theme of the fair was technological innovation over the century since Chicago's founding. 1940 to 1979. During World War II, the city of Chicago alone produced more steel than the United Kingdom every year from 1939 to 1945, and more than Nazi Germany from 1943 to 1945. The Great Migration, which had been on pause due to the Depression, resumed at an even faster pace in the second wave, as hundreds of thousands of blacks from the South arrived in the city to work in the steel mills, railroads, and shipping yards. On December 2, 1942, physicist Enrico Fermi conducted the world's first controlled, self-sustaining nuclear chain reaction at the University of Chicago as part of the top-secret Manhattan Project. This led to the creation of the atomic bomb by the United States, which it used in World War II in 1945. Mayor Richard J. Daley, a Democrat, was elected in 1955, in the era of machine politics. In 1956, the city conducted its last major expansion when it annexed the land under O'Hare Airport, including a small portion of DuPage County. By the 1960s, white residents in several neighborhoods left the city for the suburbs, a process known in many American cities as white flight, as Blacks continued to move beyond the Black Belt. While discriminatory home-loan redlining against blacks continued, the real estate industry practiced what became known as blockbusting, completely changing the racial composition of whole neighborhoods. Structural changes in industry, such as globalization and job outsourcing, caused heavy job losses for lower-skilled workers. At its peak during the 1960s, some 250,000 workers were employed in the steel industry in Chicago, but the steel crisis of the 1970s and 1980s reduced this number to just 28,000 in 2015. In 1966, Martin Luther King Jr. and Albert Raby led the Chicago Freedom Movement, which culminated in agreements between Mayor Richard J. Daley and the movement leaders. Two years later, the city hosted the tumultuous 1968 Democratic National Convention, which featured physical confrontations both inside and outside the convention hall, with anti-war protesters, journalists and bystanders being beaten by police. Major construction projects, including the Sears Tower (now known as the Willis Tower, which in 1974 became the world's tallest building), University of Illinois at Chicago, McCormick Place, and O'Hare International Airport, were undertaken during Richard J. Daley's tenure. In 1979, Jane Byrne, the city's first female mayor, was elected. She was notable for temporarily moving into the crime-ridden Cabrini-Green housing project and for leading Chicago's school system out of a financial crisis. 1980 to present. In 1983, Harold Washington became the first black mayor of Chicago. Washington's first term in office directed attention to poor and previously neglected minority neighborhoods. He was re-elected in 1987 but died of a heart attack soon after. Washington was succeeded by 6th ward alderperson Eugene Sawyer, who was elected by the Chicago City Council and served until a special election. Richard M. Daley, son of Richard J. Daley, was elected in 1989.
His accomplishments included improving parks and creating incentives for sustainable development, as well as closing Meigs Field in the middle of the night and destroying the runways. After successfully running for re-election five times, and becoming Chicago's longest-serving mayor, Richard M. Daley declined to run for a seventh term. In 1992, a construction accident near the Kinzie Street Bridge produced a breach connecting the Chicago River to a tunnel below, which was part of an abandoned freight tunnel system extending throughout the downtown Loop district. The tunnels filled with of water, affecting buildings throughout the district and forcing a shutdown of electrical power. The area was shut down for three days and some buildings did not reopen for weeks; losses were estimated at $1.95 billion. On February 23, 2011, Rahm Emanuel, a former White House Chief of Staff and member of the House of Representatives, won the mayoral election. Emanuel was sworn in as mayor on May 16, 2011, and won re-election in 2015. Lori Lightfoot, the city's first African American woman mayor and its first openly LGBTQ mayor, was elected to succeed Emanuel as mayor in 2019. All three city-wide elective offices were held by women (and women of color) for the first time in Chicago history: in addition to Lightfoot, the city clerk was Anna Valencia and the city treasurer was Melissa Conyears-Ervin. On May 15, 2023, Brandon Johnson assumed office as the 57th mayor of Chicago. Geography. Topography. Chicago is located in northeastern Illinois on the southwestern shores of freshwater Lake Michigan. It is the principal city in the Chicago Metropolitan Area, situated in both the Midwestern United States and the Great Lakes region. The city rests on a continental divide at the site of the Chicago Portage, connecting the Mississippi River and the Great Lakes watersheds. In addition to the city lying beside Lake Michigan, two rivers—the Chicago River in downtown and the Calumet River in the industrial far South Side—flow either entirely or partially through the city. Chicago's history and economy are closely tied to its proximity to Lake Michigan. While the Chicago River historically handled much of the region's waterborne cargo, today's huge lake freighters use the city's Lake Calumet Harbor on the South Side. The lake also provides another positive effect: moderating Chicago's climate, making waterfront neighborhoods slightly warmer in winter and cooler in summer. When Chicago was founded in 1837, most of the early building was around the mouth of the Chicago River, as can be seen on a map of the city's original 58 blocks. The grade of the city's central, built-up areas is relatively consistent with the natural flatness of its overall geography, generally exhibiting only slight differentiation otherwise. The average land elevation is above sea level. While measurements vary somewhat, the lowest points are along the lake shore at , while the highest point, at , is the morainal ridge of Blue Island in the city's far south side. Lake Shore Drive runs adjacent to a large portion of Chicago's waterfront. Some of the parks along the waterfront include Lincoln Park, Grant Park, Burnham Park, and Jackson Park. There are 24 public beaches across of the waterfront. Landfill extends into portions of the lake, providing space for Navy Pier, Northerly Island, the Museum Campus, and large portions of the McCormick Place Convention Center.
Most of the city's high-rise commercial and residential buildings are close to the waterfront. An informal name for the entire Chicago metropolitan area is "Chicagoland", which generally means the city and all its suburbs, though different organizations have slightly different definitions. Communities. Major sections of the city include the central business district, called the Loop, and the North, South, and West Sides. The three sides of the city are represented on the Flag of Chicago by three horizontal white stripes. The North Side is the most densely populated residential section of the city, and many high-rises are located on this side of the city along the lakefront. The South Side is the largest section of the city, encompassing roughly 60% of the city's land area. The South Side contains most of the facilities of the Port of Chicago. In the late 1920s, sociologists at the University of Chicago subdivided the city into 77 distinct community areas, which can further be subdivided into over 200 informally defined neighborhoods. Streetscape. Chicago's streets were laid out in a street grid that grew from the city's original townsite plot, which was bounded by Lake Michigan on the east, North Avenue on the north, Wood Street on the west, and 22nd Street on the south. Streets following the Public Land Survey System section lines later became arterial streets in outlying sections. As new additions to the city were platted, city ordinance required them to be laid out with eight streets to the mile in one direction and sixteen in the other direction, about one street per 200 meters in one direction and one street per 100 meters in the other direction. The grid's regularity provided an efficient means of developing new real estate property. A scattering of diagonal streets, many of them originally Native American trails, also cross the city (Elston, Milwaukee, Ogden, Lincoln, etc.). Many additional diagonal streets were recommended in the Plan of Chicago, but only the extension of Ogden Avenue was ever constructed. In 2021, Chicago was ranked the fourth-most walkable large city in the United States. Many of the city's residential streets have a wide patch of grass or trees between the street and the sidewalk itself. This helps to keep pedestrians on the sidewalk further away from the street traffic. Chicago's Western Avenue is the longest continuous urban street in the world. Other notable streets include Michigan Avenue, State Street, 95th Street, Cicero Avenue, Clark Street, and Belmont Avenue. The City Beautiful movement inspired Chicago's boulevards and parkways. Architecture. The destruction caused by the Great Chicago Fire led to the largest building boom in the history of the nation. In 1885, the first steel-framed high-rise building, the Home Insurance Building, rose in the city as Chicago ushered in the skyscraper era, which would then be followed by many other cities around the world. Today, Chicago's skyline is among the world's tallest and densest. Some of the United States' tallest towers are located in Chicago; Willis Tower (formerly Sears Tower) is the second tallest building in the Western Hemisphere after One World Trade Center, and Trump International Hotel and Tower is the third tallest in the country. The Loop's historic buildings include the Chicago Board of Trade Building, the Fine Arts Building, 35 East Wacker, the Chicago Building, and the 860–880 Lake Shore Drive Apartments by Mies van der Rohe.
Many other architects have left their impression on the Chicago skyline, such as Daniel Burnham, Louis Sullivan, Charles B. Atwood, John Root, and Helmut Jahn. The Merchandise Mart, once the largest building in the world, had its own zip code until 2008, and stands near the junction of the North and South branches of the Chicago River. Presently, the four tallest buildings in the city are Willis Tower (formerly the Sears Tower, also a building with its own zip code), Trump International Hotel and Tower, the Aon Center (previously the Standard Oil Building), and the John Hancock Center. Industrial districts are clustered in areas such as parts of the South Side, the areas along the Chicago Sanitary and Ship Canal, and the Northwest Indiana area. Chicago gave its name to the Chicago School and was home to the Prairie School, two movements in architecture. Multiple kinds and scales of houses, townhouses, condominiums, and apartment buildings can be found throughout Chicago. Large swaths of the city's residential areas away from the lake are characterized by brick bungalows built from the early 20th century through the end of World War II. Chicago is also a prominent center of the Polish Cathedral style of church architecture. The Chicago suburb of Oak Park was home to famous architect Frank Lloyd Wright, who designed the Robie House, located near the University of Chicago. A popular tourist activity is to take an architecture boat tour along the Chicago River. Monuments and public art. Chicago is famous for its outdoor public art, with donors establishing funding for such art as far back as Benjamin Ferguson's 1905 trust. A number of Chicago's public art works are by modern figurative artists. Among these are Chagall's Four Seasons; the Chicago Picasso; Miró's Chicago; Calder's Flamingo; Oldenburg's Batcolumn; Moore's Large Interior Form, 1953–54, Man Enters the Cosmos and Nuclear Energy; Dubuffet's Monument with Standing Beast; Abakanowicz's Agora; and Anish Kapoor's Cloud Gate, which has become an icon of the city. Some events which shaped the city's history have also been memorialized by art works, including the Great Northern Migration (Saar) and the centennial of statehood for Illinois. Finally, two fountains near the Loop also function as monumental works of art: Plensa's Crown Fountain as well as Burnham and Bennett's Buckingham Fountain. Climate. The city mostly lies within the typical hot-summer humid continental climate (Köppen: "Dfa"), and experiences four distinct seasons. Summers are hot and humid, with frequent heat waves. The July daily average temperature is , with afternoon temperatures peaking at . In a normal summer, temperatures reach at least on 17 days, with lakefront locations staying cooler when winds blow off the lake. Winters are relatively cold and snowy. Blizzards do occur, such as in winter 2011. There are many sunny but cold days. The normal winter high from December through March is about . January and February are the coldest months. A polar vortex in January 2019 nearly broke the city's cold record of , which was set on January 20, 1985. Measurable snowfall can continue through the first or second week of April. Spring and autumn are mild, short seasons, typically with low humidity. Dew point temperatures in the summer range from an average of in June to in July. They can reach nearly , such as during the July 2019 heat wave. The city lies within USDA plant hardiness zone 6a, transitioning to 5b in the suburbs.
According to the National Weather Service, Chicago's highest official temperature reading of was recorded on July 24, 1934. Midway Airport reached one day prior and recorded a heat index of during the 1995 heatwave. The lowest official temperature of was recorded on January 20, 1985, at O'Hare Airport. Most of the city's rainfall is brought by thunderstorms, averaging 38 a year. The region is prone to severe thunderstorms during the spring and summer which can produce large hail, damaging winds, and occasionally tornadoes. Notably, the F4 Oak Lawn tornado moved through the South Side of the city on April 21, 1967, moving onto Lake Michigan as a waterspout. Downtown Chicago was struck by an F3 tornado on May 6, 1876, again moving out over Lake Michigan. Like other major cities, Chicago experiences an urban heat island, making the city and its suburbs milder than surrounding rural areas, especially at night and in winter. The proximity to Lake Michigan tends to keep the Chicago lakefront somewhat cooler in summer and less brutally cold in winter than inland parts of the city and suburbs away from the lake, which is sufficient to give lakefront areas such as Northerly Island a humid subtropical ("Cfa") climate using Köppen's winter isotherm (as opposed to the firmly continental climate of inland areas such as Midway and O'Hare International Airports), even though those areas are still continental ("Dca") under Trewartha due to winters averaging below . Northeast winds from wintertime cyclones departing south of the region sometimes bring the city lake-effect snow. Time zone. As in the rest of the state of Illinois, Chicago forms part of the Central Time Zone. The border with the Eastern Time Zone is located a short distance to the east, used in Michigan and certain parts of Indiana. Demographics. During its first hundred years, Chicago was one of the fastest-growing cities in the world. When founded in 1833, fewer than 200 people had settled on what was then the American frontier. By the time of its first census, seven years later, the population had reached over 4,000. In the forty years from 1850 to 1890, the city's population grew from slightly under 30,000 to over 1 million. At the end of the 19th century, Chicago was the 5th-most populous city in the world, and the largest of the cities that did not exist at the dawn of the century. Within sixty years of the Great Chicago Fire of 1871, the population went from about 300,000 to over 3 million, and reached its highest ever recorded population of 3.6 million for the 1950 census. From the last two decades of the 19th century, Chicago was the destination of waves of immigrants from Ireland, Southern, Central and Eastern Europe, including Italians, Jews, Russians, Poles, Greeks, Lithuanians, Bulgarians, Albanians, Romanians, Turks, Croatians, Serbs, Bosnians, Montenegrins and Czechs. To these ethnic groups, the basis of the city's industrial working class, were added an additional influx of African Americans from the American South—with Chicago's black population doubling between 1910 and 1920 and doubling again between 1920 and 1930. Chicago has a significant Bosnian population, many of whom arrived in the 1990s and 2000s. In the 1920s and 1930s, the great majority of African Americans moving to Chicago settled in a so‑called "Black Belt" on the city's South Side. A large number of blacks also settled on the West Side. By 1930, two-thirds of Chicago's black population lived in sections of the city which were 90% black in racial composition. 
On the North Side around that time, the 4600 block of Winthrop Avenue in Uptown was the only block where African Americans could live or open establishments. Chicago's South Side emerged as the United States' second-largest urban black concentration, following New York's Harlem. In 1990, Chicago's South Side and the adjoining south suburbs constituted the largest black majority region in the entire United States. Since the 1980s, Chicago has had a massive exodus of African Americans (primarily from the South and West sides) to its suburbs or outside its metropolitan area. Above-average crime and the cost of living were leading reasons for the rapidly declining African American population in Chicago. Most of Chicago's foreign-born population were born in Mexico, Poland or India. A 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,500. Chicago's population declined in the latter half of the 20th century, from over 3.6 million in 1950 down to under 2.7 million by 2010. By the time of the official census count in 1990, it had been overtaken by Los Angeles as the United States' second-largest city. The city's population rose for the 2000 census, decreased again in 2010, and rose once more for the 2020 census. According to U.S. census estimates, Chicago's largest racial or ethnic group is non-Hispanic White at 32.8% of the population, followed by Black residents at 30.1% and Hispanic residents at 29.0%. Chicago has the third-largest LGBT population in the United States. In 2018, the Chicago Department of Health estimated that 7.5% of the adult population, approximately 146,000 Chicagoans, were LGBTQ. In 2015, roughly 4% of the population identified as LGBT. Since the 2013 legalization of same-sex marriage in Illinois, over 10,000 same-sex couples have wed in Cook County, a majority of them in Chicago. Chicago became a "de jure" sanctuary city in 2012 when Mayor Rahm Emanuel and the City Council passed the Welcoming City Ordinance. According to the U.S. Census Bureau's American Community Survey data estimates for 2022, the median income for a household in the city was $70,386, and the per capita income was $45,449. Male full-time workers had a median income of $68,870 versus $60,987 for females. About 17.2% of the population lived below the poverty line. In 2018, Chicago ranked seventh globally for the highest number of ultra-high-net-worth residents, with roughly 3,300 residents worth more than $30 million. According to the 2022 American Community Survey, the specific ancestral groups having 10,000 or more persons in Chicago were: Persons who did not report or classify an ancestry numbered 548,790. Religion. According to a 2014 study by the Pew Research Center, Christianity is the most prevalently practiced religion in Chicago (71%), with the city being the fourth-most religious metropolis in the United States after Dallas, Atlanta and Houston. Roman Catholicism and Protestantism are the largest branches (34% and 35% respectively), followed by Eastern Orthodoxy and Jehovah's Witnesses with 1% each. Chicago also has a sizable non-Christian population. Non-Christian groups include the irreligious (22%), Judaism (3%), Islam (2%), Buddhism (1%) and Hinduism (1%). Chicago is the headquarters of several religious denominations, including the Evangelical Covenant Church and the Evangelical Lutheran Church in America. It is the seat of several dioceses. 
The Fourth Presbyterian Church is one of the largest Presbyterian congregations in the United States based on memberships. Since the 20th century Chicago has also been the headquarters of the Assyrian Church of the East. In 2014 the Catholic Church was the largest individual Christian denomination (34%), with the Roman Catholic Archdiocese of Chicago being the largest Catholic jurisdiction. Evangelical Protestantism form the largest theological Protestant branch (16%), followed by Mainline Protestants (11%), and historically Black churches (8%). Among denominational Protestant branches, Baptists formed the largest group in Chicago (10%); followed by Nondenominational (5%); Lutherans (4%); and Pentecostals (3%). Non-Christian faiths accounted for 7% of the religious population in 2014. Judaism has at least 261,000 adherents which is 3% of the population. A 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,500. The first two Parliament of the World's Religions in 1893 and 1993 were held in Chicago. Many international religious leaders have visited Chicago, including Mother Teresa, the Dalai Lama and Pope John Paul II in 1979. Pope Leo XIV was born in Chicago in 1955 and graduated from the Catholic Theological Union in Hyde Park. Economy. Chicago has the third-largest gross metropolitan product in the United States—about $670.5 billion according to September 2017 estimates. The city has also been rated as having the most balanced economy in the United States, due to its high level of diversification. The Chicago metropolitan area has the third-largest science and engineering work force of any metropolitan area in the nation. Chicago was the base of commercial operations for industrialists John Crerar, John Whitfield Bunn, Richard Teller Crane, Marshall Field, John Farwell, Julius Rosenwald, and many other commercial visionaries who laid the foundation for Midwestern and global industry. Chicago is a major world financial center, with the second-largest central business district in the United States, following Midtown Manhattan. The city is the seat of the Federal Reserve Bank of Chicago, the Bank's Seventh District. The city has major financial and futures exchanges, including the Chicago Stock Exchange, the Chicago Board Options Exchange (CBOE), and the Chicago Mercantile Exchange (the "Merc"), which is owned, along with the Chicago Board of Trade (CBOT), by Chicago's CME Group. In 2017, Chicago exchanges traded 4.7 billion in derivatives. Chase Bank has its commercial and retail banking headquarters in Chicago's Chase Tower. Academically, Chicago has been influential through the Chicago school of economics, which fielded 12 Nobel Prize winners. The city and its surrounding metropolitan area contain the third-largest labor pool in the United States with about 4.63 million workers. Illinois is home to 66 "Fortune" 1000 companies, including those in Chicago. The city of Chicago also hosts 12 "Fortune" Global 500 companies and 17 "Financial Times" 500 companies. The city claims three Dow 30 companies: aerospace giant Boeing, which moved its headquarters from Seattle to the Chicago Loop in 2001; McDonald's; and Walgreens Boots Alliance. For six consecutive years from 2013 through 2018, Chicago was ranked the nation's top metropolitan area for corporate relocations. However, three "Fortune" 500 companies left Chicago in 2022, leaving the city with 35, still second to New York City. 
Manufacturing, printing, publishing, and food processing also play major roles in the city's economy. Several medical products and services companies are based in the Chicago area, including Baxter International, Boeing, Abbott Laboratories, and the Healthcare division of General Electric. Prominent food companies based in Chicago include the world headquarters of Conagra, Ferrara Candy Company, Kraft Heinz, McDonald's, Mondelez International, and Quaker Oats. Chicago has been a hub of the retail sector since its early development, with Montgomery Ward, Sears, and Marshall Field's. Today the Chicago metropolitan area is the headquarters of several retailers, including Walgreens, Sears, Ace Hardware, Claire's, ULTA Beauty, and Crate & Barrel. Late in the 19th century, Chicago was part of the bicycle craze, with the Western Wheel Company, which introduced stamping to the production process and significantly reduced costs, while early in the 20th century, the city was part of the automobile revolution, hosting the Brass Era car builder Bugmobile, which was founded there in 1907. Chicago was also the site of the Schwinn Bicycle Company. Chicago is a major world convention destination. The city's main convention center is McCormick Place. With its four interconnected buildings, it is the largest convention center in the nation and third-largest in the world. Chicago also ranks third in the U.S. (behind Las Vegas and Orlando) in number of conventions hosted annually. Chicago's minimum wage for non-tipped employees is one of the highest in the nation and reached $15 in 2021. Culture and contemporary life. The city's waterfront location and nightlife attracts residents and tourists alike. Over a third of the city population is concentrated in the lakefront neighborhoods from Rogers Park in the north to South Shore in the south. The city has many upscale dining establishments as well as many ethnic restaurant districts. These districts include the Mexican American neighborhoods, such as Pilsen along 18th street, and "La Villita" along 26th Street; the Puerto Rican enclave of Paseo Boricua in the Humboldt Park neighborhood; Greektown, along South Halsted Street, immediately west of downtown; Little Italy, along Taylor Street; Chinatown in Armour Square; Polish Patches in West Town; Little Seoul in Albany Park around Lawrence Avenue; Little Vietnam near Broadway in Uptown; and the Desi area, along Devon Avenue in West Ridge. Downtown is the center of Chicago's financial, cultural, governmental, and commercial institutions and the site of Grant Park and many of the city's skyscrapers. Many of the city's financial institutions, such as the CBOT and the Federal Reserve Bank of Chicago, are located within a section of downtown called "The Loop", which is an eight-block by five-block area of city streets that is encircled by elevated rail tracks. The term "The Loop" is largely used by locals to refer to the entire downtown area as well. The central area includes the Near North Side, the Near South Side, and the Near West Side, as well as the Loop. These areas contribute famous skyscrapers, abundant restaurants, shopping, museums, Soldier Field, convention facilities, parkland, and beaches. Lincoln Park contains Lincoln Park Zoo and Lincoln Park Conservatory. The River North Gallery District features the nation's largest concentration of contemporary art galleries outside of New York City. Lake View is home to Boystown, the city's large LGBT nightlife and culture center. 
The Chicago Pride Parade, held the last Sunday in June, is one of the world's largest with over a million people in attendance. North Halsted Street is the main thoroughfare of Boystown. The South Side neighborhood of Hyde Park is the home of former U.S. President Barack Obama. It also contains the University of Chicago, ranked one of the world's top ten universities, and the Museum of Science and Industry. The long Burnham Park stretches along the waterfront of the South Side. Two of the city's largest parks are also located on this side of the city: Jackson Park, bordering the waterfront, hosted the World's Columbian Exposition in 1893, and is the site of the aforementioned museum; and slightly west sits Washington Park. The two parks themselves are connected by a wide strip of parkland called the Midway Plaisance, running adjacent to the University of Chicago. The South Side hosts one of the city's largest parades, the annual African American Bud Billiken Parade and Picnic, which travels through Bronzeville to Washington Park. Ford Motor Company has an automobile assembly plant on the South Side in Hegewisch, and most of the facilities of the Port of Chicago are also on the South Side. The West Side holds the Garfield Park Conservatory, one of the largest collections of tropical plants in any U.S. city. Prominent Latino cultural attractions found here include Humboldt Park's Institute of Puerto Rican Arts and Culture and the annual Puerto Rican People's Parade, as well as the National Museum of Mexican Art and St. Adalbert's Church in Pilsen. The Near West Side holds the University of Illinois at Chicago and was once home to Oprah Winfrey's Harpo Studios, the site of which has been rebuilt as the global headquarters of McDonald's. The city's distinctive accent, made famous by its use in classic films like "The Blues Brothers" and television programs like the "Saturday Night Live" skit "Bill Swerski's Superfans", is an advanced form of Inland Northern American English. This dialect can also be found in other cities bordering the Great Lakes such as Cleveland, Milwaukee, Detroit, and Rochester, New York, and most prominently features a rearrangement of certain vowel sounds, such as the short 'a' sound as in "cat", which can sound more like "kyet" to outsiders. The accent remains well associated with the city. Entertainment and the arts. Renowned Chicago theater companies include the Goodman Theatre in the Loop; the Steppenwolf Theatre Company and Victory Gardens Theater in Lincoln Park; and the Chicago Shakespeare Theater at Navy Pier. Broadway In Chicago offers Broadway-style entertainment at five theaters: the Nederlander Theatre, CIBC Theatre, Cadillac Palace Theatre, Auditorium Building of Roosevelt University, and Broadway Playhouse at Water Tower Place. Polish language productions for Chicago's large Polish speaking population can be seen at the historic Gateway Theatre in Jefferson Park. Since 1968, the Joseph Jefferson Awards are given annually to acknowledge excellence in theater in the Chicago area. Chicago's theater community spawned modern improvisational theater, and includes the prominent groups The Second City and I.O. (formerly ImprovOlympic). The Chicago Symphony Orchestra (CSO) performs at Symphony Center, and is recognized as one of the best orchestras in the world. Also performing regularly at Symphony Center is the Chicago Sinfonietta, a more diverse and multicultural counterpart to the CSO. 
In the summer, many outdoor concerts are given in Grant Park and Millennium Park. Ravinia Festival, located north of Chicago, is the summer home of the CSO, and is a favorite destination for many Chicagoans. The Civic Opera House is home to the Lyric Opera of Chicago. The Lithuanian Opera Company of Chicago was founded by Lithuanian Chicagoans in 1956, and presents operas in Lithuanian. The Joffrey Ballet and Chicago Festival Ballet perform in various venues, including the Harris Theater in Millennium Park. Chicago has several other contemporary and jazz dance troupes, such as the Hubbard Street Dance Chicago and Chicago Dance Crash. Other live-music genre which are part of the city's cultural heritage include Chicago blues, Chicago soul, jazz, and gospel. The city is the birthplace of house music (a popular form of electronic dance music) and industrial music, and is the site of an influential hip hop scene. In the 1980s and 90s, the city was the global center for house and industrial music, two forms of music created in Chicago, as well as being popular for alternative rock, punk, and new wave. The city has been a center for rave culture, since the 1980s. A flourishing independent rock music culture brought forth Chicago indie. Annual festivals feature various acts, such as Lollapalooza and the Pitchfork Music Festival. Lollapalooza originated in Chicago in 1991 and at first travelled to many cities, but as of 2005 its home has been Chicago. A 2007 report on the Chicago music industry by the University of Chicago Cultural Policy Center ranked Chicago third among metropolitan U.S. areas in "size of music industry" and fourth among all U.S. cities in "number of concerts and performances". Chicago has a distinctive fine art tradition. For much of the twentieth century, it nurtured a strong style of figurative surrealism, as in the works of Ivan Albright and Ed Paschke. In 1968 and 1969, members of the Chicago Imagists, such as Roger Brown, Leon Golub, Robert Lostutter, Jim Nutt, and Barbara Rossi produced bizarre representational paintings. Henry Darger is one of the most celebrated figures of outsider art. Tourism. , Chicago attracted 50.17 million domestic leisure travelers, 11.09 million domestic business travelers and 1.308 million overseas visitors. These visitors contributed more than billion to Chicago's economy. Upscale shopping along the Magnificent Mile and State Street, thousands of restaurants, as well as Chicago's eminent architecture, continue to draw tourists. The city is the United States' third-largest convention destination. A 2017 study by Walk Score ranked Chicago the sixth-most walkable of fifty largest cities in the United States. Most conventions are held at McCormick Place, just south of Soldier Field. Navy Pier, located just east of Streeterville, is long and houses retail stores, restaurants, museums, exhibition halls and auditoriums. Chicago was the first city in the world to ever erect a Ferris wheel. The Willis Tower (formerly named Sears Tower) is a popular destination for tourists. Museums. Among the city's museums are the Adler Planetarium & Astronomy Museum, the Field Museum of Natural History, and the Shedd Aquarium. The Museum Campus joins the southern section of Grant Park, which includes the renowned Art Institute of Chicago. Buckingham Fountain anchors the downtown park along the lakefront. 
The University of Chicago's Institute for the Study of Ancient Cultures, West Asia & North Africa has an extensive collection of ancient Egyptian and Near Eastern archaeological artifacts. Other museums and galleries in Chicago include the Chicago History Museum, the Driehaus Museum, the DuSable Museum of African American History, the Museum of Contemporary Art, the Peggy Notebaert Nature Museum, the Polish Museum of America, the Museum of Broadcast Communications, the Chicago Architecture Foundation, and the Museum of Science and Industry. Cuisine. Chicago lays claim to a large number of regional specialties that reflect the city's ethnic and working-class roots. Included among these are its nationally renowned deep-dish pizza; this style is said to have originated at Pizzeria Uno. The Chicago-style thin crust is also popular in the city. Certain Chicago pizza favorites include Lou Malnati's and Giordano's. The Chicago-style hot dog, typically an all-beef hot dog, is loaded with an array of toppings that often includes pickle relish, yellow mustard, pickled sport peppers, tomato wedges, dill pickle spear and topped off with celery salt on a poppy seed bun. Enthusiasts of the Chicago-style hot dog frown upon the use of ketchup as a garnish, but may prefer to add giardiniera. A distinctly Chicago sandwich, the Italian beef sandwich is thinly sliced beef simmered in au jus and served on an Italian roll with sweet peppers or spicy giardiniera. A popular modification is the Combo—an Italian beef sandwich with the addition of an Italian sausage. The Maxwell Street Polish is a grilled or deep-fried kielbasa—on a hot dog roll, topped with grilled onions, yellow mustard, and hot sport peppers. Chicken Vesuvio is roasted bone-in chicken cooked in oil and garlic next to garlicky oven-roasted potato wedges and a sprinkling of green peas. The Puerto Rican-influenced jibarito is a sandwich made with flattened, fried green plantains instead of bread. The mother-in-law is a tamale topped with chili and served on a hot dog bun. The tradition of serving the Greek dish saganaki while aflame has its origins in Chicago's Greek community. The appetizer, which consists of a square of fried cheese, is doused with Metaxa and flambéed table-side. Chicago-style barbecue features hardwood smoked rib tips and hot links which were traditionally cooked in an aquarium smoker, a Chicago invention. Annual festivals feature various Chicago signature dishes, such as Taste of Chicago and the Chicago Food Truck Festival. One of the world's most decorated restaurants and a recipient of three Michelin stars, Alinea is located in Chicago. Well-known chefs who have had restaurants in Chicago include: Charlie Trotter, Rick Tramonto, Grant Achatz, and Rick Bayless. In 2003, "Robb Report" named Chicago the country's "most exceptional dining destination". Literature. Chicago literature finds its roots in the city's tradition of lucid, direct journalism, lending to a strong tradition of social realism. In the "Encyclopedia of Chicago", Northwestern University Professor Bill Savage describes Chicago fiction as prose which tries to "capture the essence of the city, its spaces and its people." The challenge for early writers was that Chicago was a frontier outpost that transformed into a global metropolis in the span of two generations. Narrative fiction of that time, much of it in the style of "high-flown romance" and "genteel realism", needed a new approach to describe the urban social, political, and economic conditions of Chicago. 
Nonetheless, Chicagoans worked hard to create a literary tradition that would stand the test of time, and create a "city of feeling" out of concrete, steel, vast lake, and open prairie. Much notable Chicago fiction focuses on the city itself, with social criticism keeping exultation in check. At least three short periods in the history of Chicago have had a lasting influence on American literature. These include from the time of the Great Chicago Fire to about 1900, what became known as the Chicago Literary Renaissance in the 1910s and early 1920s, and the period of the Great Depression through the 1940s. What would become the influential "Poetry" magazine was founded in 1912 by Harriet Monroe, who was working as an art critic for the "Chicago Tribune". The magazine discovered such poets as Gwendolyn Brooks, James Merrill, and John Ashbery. T. S. Eliot's first professionally published poem, "The Love Song of J. Alfred Prufrock", was first published by "Poetry". Contributors have included Ezra Pound, William Butler Yeats, William Carlos Williams, Langston Hughes, and Carl Sandburg, among others. The magazine was instrumental in launching the Imagist and Objectivist poetic movements. From the 1950s through 1970s, American poetry continued to evolve in Chicago. In the 1980s, a modern form of poetry performance began in Chicago, the poetry slam. Sports. The city has two Major League Baseball (MLB) teams: the Chicago Cubs of the National League play in Wrigley Field on the North Side; and the Chicago White Sox of the American League play in Rate Field on the South Side. The two teams have faced each other in a World Series only once, in 1906. The Cubs are the oldest Major League Baseball team to have never changed their city; they have played in Chicago since 1871. They had the dubious honor of having the longest championship drought in American professional sports, failing to win a World Series between 1908 and 2016. The White Sox have played on the South Side continuously since 1901. They have won three World Series titles (1906, 1917, 2005) and six American League pennants, including the first in 1901. The Chicago Bears, one of the last two remaining charter members of the National Football League (NFL), have won nine NFL Championships, including the 1985 Super Bowl XX. The Bears play their home games at Soldier Field. The Chicago Bulls of the National Basketball Association (NBA) is one of the most recognized basketball teams in the world. During the 1990s, with Michael Jordan leading them, the Bulls won six NBA championships in eight seasons. The Chicago Blackhawks of the National Hockey League (NHL) began play in 1926, and are one of the "Original Six" teams of the NHL. The Blackhawks have won six Stanley Cups, including in 2010, 2013, and 2015. Both the Bulls and the Blackhawks play at the United Center. Chicago Fire FC is a member of Major League Soccer (MLS) and plays at Soldier Field. The Fire have won one league title and four U.S. Open Cups, since their founding in 1997. In 1994, the United States hosted a successful FIFA World Cup with games played at Soldier Field. The Chicago Stars FC are a team in the National Women's Soccer League (NWSL). They previously played in Women's Professional Soccer (WPS), of which they were a founding member, before joining the NWSL in 2013. They play at SeatGeek Stadium in Bridgeview, Illinois. The Chicago Sky is a professional basketball team playing in the Women's National Basketball Association (WNBA). They play home games at the Wintrust Arena. 
The team was founded before the 2006 WNBA season began. The Chicago Marathon has been held each year since 1977 except for 1987, when a half marathon was run in its place. The Chicago Marathon is one of six World Marathon Majors. Five area colleges play in Division I conferences: two from major conferences—the DePaul Blue Demons (Big East Conference) and the Northwestern Wildcats (Big Ten Conference)—and three from other D1 conferences—the Chicago State Cougars (Northeast Conference); the Loyola Ramblers (Atlantic 10 Conference); and the UIC Flames (Missouri Valley Conference). Chicago has also entered into esports with the creation of the OpTic Chicago, a professional Call of Duty team that participates within the CDL. Parks and greenspace. When Chicago was incorporated in 1837, it chose the motto "Urbs in Horto", a Latin phrase which means "City in a Garden". Today, the Chicago Park District consists of more than 570 parks with over of municipal parkland. There are 31 sand beaches, a plethora of museums, two world-class conservatories, and 50 nature areas. Lincoln Park, the largest of the city's parks, covers and has over 20 million visitors each year, making it third in the number of visitors after Central Park in New York City, and the National Mall and Memorial Parks in Washington, D.C. There is a historic boulevard system, a network of wide, tree-lined boulevards which connect a number of Chicago parks. The boulevards and the parks were authorized by the Illinois legislature in 1869. A number of Chicago neighborhoods emerged along these roadways in the 19th century. The building of the boulevard system continued intermittently until 1942. It includes nineteen boulevards, eight parks, and six squares, along twenty-six miles of interconnected streets. The "Chicago Park Boulevard System Historic District" was listed on the National Register of Historic Places in 2018. With berths for more than 6,000 boats, the Chicago Park District operates the nation's largest municipal harbor system. In addition to ongoing beautification and renewal projects for the existing parks, a number of new parks have been added in recent years, such as the Ping Tom Memorial Park in Chinatown, DuSable Park on the Near North Side, and most notably, Millennium Park, which is in the northwestern corner of one of Chicago's oldest parks, Grant Park in the Chicago Loop. The wealth of greenspace afforded by Chicago's parks is further augmented by the Cook County Forest Preserves, a network of open spaces containing forest, prairie, wetland, streams, and lakes that are set aside as natural areas which lie along the city's outskirts, including both the Chicago Botanic Garden in Glencoe and the Brookfield Zoo in Brookfield. Washington Park is also one of the city's biggest parks; covering nearly . The park is listed on the National Register of Historic Places listings in South Side Chicago. Law and government. Government. The government of the City of Chicago is divided into executive and legislative branches. The mayor of Chicago is the chief executive, elected by general election for a term of four years, with no term limits. The incumbent mayor is Brandon Johnson. The mayor appoints commissioners and other officials who oversee the various departments. As well as the mayor, Chicago's clerk and treasurer are also elected citywide. The City Council is the legislative branch and is made up of 50 alderpersons, one elected from each ward in the city. 
The council takes official action through the passage of ordinances and resolutions and approves the city budget. The Chicago Police Department provides law enforcement and the Chicago Fire Department provides fire suppression and emergency medical services for the city and its residents. Civil and criminal law cases are heard in the Cook County Circuit Court of the State of Illinois court system, or in the Northern District of Illinois, in the federal system. In the state court, the public prosecutor is the Illinois state's attorney; in the Federal court it is the United States attorney. Politics. During much of the last half of the 19th century, Chicago's politics were dominated by a growing Democratic Party organization. During the 1880s and 1890s, Chicago had a powerful radical tradition with large and highly organized socialist, anarchist and labor organizations. For much of the 20th century, Chicago has been among the largest and most reliable Democratic strongholds in the United States; with Chicago's Democratic vote the state of Illinois has been "solid blue" in presidential elections since 1992. Even before then, it was not unheard of for Republican presidential candidates to win handily in downstate Illinois, only to lose statewide due to large Democratic margins in Chicago. The citizens of Chicago have not elected a Republican mayor since 1927, when William Thompson was voted into office. The strength of the party in the city is partly a consequence of Illinois state politics, where the Republicans have come to represent rural and farm concerns while the Democrats support urban issues such as Chicago's public school funding. Chicago contains less than 25% of the state's population, but it is split between eight of Illinois' 17 districts in the United States House of Representatives. All eight of the city's representatives are Democrats; only two Republicans have represented a significant portion of the city since 1973, for one term each: Robert P. Hanrahan from 1973 to 1975, and Michael Patrick Flanagan from 1995 to 1997. Machine politics persisted in Chicago after the decline of similar machines in other large U.S. cities. During much of that time, the city administration found opposition mainly from a liberal "independent" faction of the Democratic Party. The independents finally gained control of city government in 1983 with the election of Harold Washington (in office 1983–1987). From 1989 until May 16, 2011, Chicago was under the leadership of its longest-serving mayor, Richard M. Daley, the son of Richard J. Daley. Because of the dominance of the Democratic Party in Chicago, the Democratic primary vote held in the spring is generally more significant than the general elections in November for U.S. House and Illinois State seats. The aldermanic, mayoral, and other city offices are filled through nonpartisan elections with runoffs as needed. The city is home of former United States President Barack Obama and First Lady Michelle Obama; Barack Obama was formerly a state legislator representing Chicago and later a U.S. senator. The Obamas' residence is located near the University of Chicago in Kenwood on the city's south side. Crime. Chicago's crime rate in 2020 was 3,926 per 100,000 people. Chicago experienced major rises in violent crime in the 1920s, in the late 1960s, and in the 2020s. Chicago's biggest criminal justice challenges have changed little over the last 50 years, and statistically reside with homicide, armed robbery, gang violence, and aggravated battery. 
Chicago has a higher murder rate than the larger cities of New York and Los Angeles. However, while it has a large absolute number of crimes due to its size, Chicago is not among the top-25 most violent cities in the United States. Murder rates in Chicago vary greatly depending on the neighborhood in question. The neighborhoods of Englewood on the South Side, and Austin on the West side, for example, have homicide rates that are ten times higher than other parts of the city. Chicago has an estimated population of over 100,000 active gang members from nearly 60 factions. According to reports in 2013, "most of Chicago's violent crime comes from gangs trying to maintain control of drug-selling territories," and is specifically related to the activities of the Sinaloa Cartel, which is active in several American cities. Violent crime rates vary significantly by area of the city, with more economically developed areas having low rates, but other sections have much higher rates of crime. In 2013, the violent crime rate was 910 per 100,000 people; the murder rate was 10.4 per 100,000 – while high crime districts saw 38.9 murders, low crime districts saw 2.5 murders per 100,000. Chicago's long history of public corruption regularly draws the attention of federal law enforcement and federal prosecutors. From 2012 to 2019, 33 Chicago alderpersons were convicted on corruption charges, roughly one third of those elected in the time period. A report from the Office of the Legislative Inspector General noted that over half of Chicago's elected alderpersons took illegal campaign contributions in 2013. Most corruption cases in Chicago are prosecuted by the U.S. Attorney's office, as legal jurisdiction makes most offenses punishable as a federal crime. Education. Schools and libraries. Chicago Public Schools (CPS) is the governing body of the school district that contains over 600 public elementary and high schools citywide, including several selective-admission magnet schools. There are eleven selective enrollment high schools in the Chicago Public Schools, designed to meet the needs of Chicago's most academically advanced students. These schools offer a rigorous curriculum with mainly honors and Advanced Placement (AP) courses. Walter Payton College Prep High School is ranked number one in the city of Chicago and the state of Illinois. Chicago high school rankings are determined by the average test scores on state achievement tests. The district, with an enrollment exceeding 400,545 students (2013–2014 20th Day Enrollment), is the third-largest in the U.S. On September 10, 2012, teachers for the Chicago Teachers Union went on strike for the first time since 1987 over pay, resources, and other issues. According to data compiled in 2014, Chicago's "choice system", where students who test or apply and may attend one of a number of public high schools (there are about 130), sorts students of different achievement levels into different schools (high performing, middle performing, and low performing schools). Chicago has a network of Lutheran schools, and several private schools are run by other denominations and faiths, such as the Ida Crown Jewish Academy in West Ridge. The Roman Catholic Archdiocese of Chicago operates Catholic schools, including Jesuit preparatory schools and others. A number of private schools are completely secular. 
There is also the private Chicago Academy for the Arts, a high school focused on six different artistic disciplines, and the public Chicago High School for the Arts, a high school focused on five disciplines (visual arts, theatre, musical theatre, dance, and music). The Chicago Public Library system operates three regional libraries and 77 neighborhood branches, including the central library. Colleges and universities. Since the 1850s, Chicago has been a world center of higher education and research with several universities. These institutions consistently rank among the top "National Universities" in the United States, as determined by "U.S. News & World Report". Highly regarded universities in Chicago and the surrounding area are the University of Chicago; Northwestern University; Illinois Institute of Technology; Loyola University Chicago; DePaul University; Columbia College Chicago and the University of Illinois Chicago. Other notable schools include: Chicago State University; the School of the Art Institute of Chicago; East–West University; National Louis University; North Park University; Northeastern Illinois University; Robert Morris University Illinois; Roosevelt University; Saint Xavier University; Rush University; and Shimer College. William Rainey Harper, the first president of the University of Chicago, was instrumental in the creation of the junior college concept, establishing nearby Joliet Junior College as the first in the nation in 1901. His legacy continues with the multiple community colleges in Chicago proper, including the seven City Colleges of Chicago: Richard J. Daley College, Kennedy–King College, Malcolm X College, Olive–Harvey College, Truman College, Harold Washington College, and Wilbur Wright College, in addition to the privately held MacCormac College. Chicago also has a high concentration of post-baccalaureate institutions, graduate schools, seminaries, and theological schools, such as the Adler School of Professional Psychology, The Chicago School, the Erikson Institute, the Institute for Clinical Social Work, the Lutheran School of Theology at Chicago, the Catholic Theological Union, the Moody Bible Institute, and the University of Chicago Divinity School. Media. Television. The Chicago metropolitan area is a major media hub and the third-largest media market in the United States, after New York City and Los Angeles. Each of the big five U.S. television networks, NBC, ABC, CBS, Fox and The CW, directly owns and operates a high-definition television station in Chicago (WMAQ 5, WLS 7, WBBM 2, WFLD 32 and WGN-TV 9, respectively). WGN-TV is counted as a CW owned-and-operated station by virtue of the Nexstar Media Group's majority stake in the network; Nexstar acquired the station from its founding owner, Tribune Broadcasting, in 2019. WGN was once carried, with some programming differences, as "WGN America" on cable and satellite TV nationwide and in parts of the Caribbean. WGN America eventually became NewsNation in 2021. Chicago has also been the home of several prominent talk shows, including "The Oprah Winfrey Show", "Steve Harvey Show", "The Rosie Show", "The Jerry Springer Show", "The Phil Donahue Show", "The Jenny Jones Show", and more. The city also has one PBS member station, WTTW 11, producer of shows such as "Sneak Previews", "The Frugal Gourmet", "Lamb Chop's Play-Along" and "The McLaughlin Group"; a second member station, WYCC 20, dropped its PBS affiliation in 2017. 
, "Windy City Live" is Chicago's only daytime talk show, which is hosted by Val Warner and Ryan Chiaverini at ABC7 Studios with a live weekday audience. Since 1999, "Judge Mathis" also films his syndicated arbitration-based reality court show at the NBC Tower. Beginning in January 2019, "Newsy" began producing 12 of its 14 hours of live news programming per day from its new facility in Chicago. Television stations. Most of Chicago's television stations are owned and operated by the big television network companies. They are: Newspapers. Two major daily newspapers are published in Chicago: the "Chicago Tribune" and the "Chicago Sun-Times", with the Tribune having the larger circulation. There are also several regional and special-interest newspapers and magazines, such as "Chicago", the "Dziennik Związkowy" ("Polish Daily News"), "Draugas" (the Lithuanian daily newspaper), the "Chicago Reader", the "SouthtownStar", the "Chicago Defender", the "Daily Herald", "Newcity", "StreetWise" and the "Windy City Times". The entertainment and cultural magazine "Time Out Chicago" and "GRAB" magazine are also published in the city, as well as local music magazine "Chicago Innerview". In addition, Chicago is the home of satirical national news outlet, "The Onion", as well as its sister pop-culture publication, "The A.V. Club". Radio. Chicago has five 50,000 watt AM radio stations: the Audacy-owned WBBM and WSCR; the Tribune Broadcasting-owned WGN; the Cumulus Media-owned WLS; and the ESPN Radio-owned WMVP. Chicago is also home to a number of national radio shows, including "Beyond the Beltway" with Bruce DuMont on Sunday evenings. Chicago Public Radio produces nationally aired programs such as PRI's "This American Life" and NPR's "Wait Wait...Don't Tell Me!". Infrastructure. Transportation. Chicago is a major transportation hub in the United States. It is an important component in global distribution, as it is the third-largest inter-modal port in the world after Hong Kong and Singapore. The city of Chicago has a higher than average percentage of households without a car. In 2015, 26.5 percent of Chicago households were without a car, and increased slightly to 27.5 percent in 2016. The national average was 8.7 percent in 2016. Chicago averaged 1.12 cars per household in 2016, compared to a national average of 1.8. Parking. Due to Chicago's wheel tax, residents of Chicago who own a vehicle are required to purchase a Chicago City Vehicle Sticker. In established Residential Parking Zones, only local residents can purchase Zone-specific parking stickers for themselves and guests. Chicago since 2009 has relinquished rights to its public street parking. In 2008, as Chicago struggled to close a growing budget deficit, the city agreed to a 75-year, $1.16 billion deal to lease its parking meter system to an operating company created by Morgan Stanley, called Chicago Parking Meters LLC. Daley said the "agreement is very good news for the taxpayers of Chicago because it will provide more than $1 billion in net proceeds that can be used during this very difficult economy." The rights of the parking ticket lease end in 2081, and since 2022 have already recouped over $1.5 billion in revenue for Chicago Parking Meters LLC investors. Expressways. Seven mainline and four auxiliary interstate highways (55, 57, 65 (only in Indiana), 80 (also in Indiana), 88, 90 (also in Indiana), 94 (also in Indiana), 190, 290, 294, and 355) run through Chicago and its suburbs. 
Segments that link to the city center are named after influential politicians, with three of them named after former U.S. Presidents (Eisenhower, Kennedy, and Reagan) and one named after two-time Democratic candidate Adlai Stevenson. The Kennedy and Dan Ryan Expressways are the busiest state-maintained routes in the entire state of Illinois. Transit systems. The Regional Transportation Authority (RTA) coordinates the operation of the three service boards: CTA, Metra, and Pace. Greyhound Lines provides inter-city bus service to and from the city at the Chicago Bus Station, and Chicago is also the hub for the Midwest network of Megabus (North America). Passenger rail. Amtrak long distance and commuter rail services originate from Union Station. Chicago is one of the largest hubs of passenger rail service in the nation. The services terminate in the San Francisco area, Washington, D.C., New York City, New Orleans, Portland, Seattle, Milwaukee, Quincy, St. Louis, Carbondale, Boston, Grand Rapids, Port Huron, Pontiac, Los Angeles, and San Antonio. Future service will terminate at Moline. An attempt was made in the early 20th century to link Chicago with New York City via the Chicago – New York Electric Air Line Railroad. Parts of this were built, but it was never completed. Bicycle and scooter sharing systems. In July 2013, the bicycle-sharing system Divvy was launched with 750 bikes and 75 docking stations. It is operated by Lyft for the Chicago Department of Transportation. As of July 2019, Divvy operated 5800 bicycles at 608 stations, covering almost all of the city, excluding Pullman, Roseland, Beverly, Belmont Cragin and Edison Park. In May 2019, the City of Chicago announced its Electric Shared Scooter Pilot Program, scheduled to run from June 15 to October 15. The program started on June 15 with 10 different scooter companies, including scooter-sharing market leaders Bird, Jump, Lime and Lyft. Each company was allowed to bring 250 electric scooters, although both Bird and Lime claimed that they experienced a higher demand for their scooters. The program ended on October 15, with nearly 800,000 rides taken. Freight rail. Chicago is the largest hub in the railroad industry. All five Class I railroads meet in Chicago. Severe freight train congestion caused trains to take as long to get through the Chicago region as it took to get there from the West Coast of the country (about 2 days). According to the U.S. Department of Transportation, the volume of imported and exported goods transported via rail to, from, or through Chicago is forecast to increase nearly 150 percent between 2010 and 2040. CREATE, the Chicago Region Environmental and Transportation Efficiency Program, comprises about 70 projects, including crossovers, overpasses and underpasses, intended to significantly improve the speed of freight movements in the Chicago area. Airports. Chicago is served by O'Hare International Airport, the world's busiest airport measured by airline operations, on the far Northwest Side, and Midway International Airport on the Southwest Side. In 2005, O'Hare was the world's busiest airport by aircraft movements and the second-busiest by total passenger traffic. Both O'Hare and Midway are owned and operated by the City of Chicago. Gary/Chicago International Airport and Chicago Rockford International Airport, located in Gary, Indiana and Rockford, Illinois, respectively, can serve as alternative Chicago area airports; however, they do not offer as many commercial flights as O'Hare and Midway. 
In recent years the state of Illinois has been leaning towards building an entirely new airport in the Illinois suburbs of Chicago. The City of Chicago is the world headquarters for United Airlines, the world's third-largest airline. Port authority. The Port of Chicago consists of several major port facilities within the city of Chicago operated by the Illinois International Port District (formerly known as the Chicago Regional Port District). The central element of the Port District, Calumet Harbor, is maintained by the U.S. Army Corps of Engineers. Utilities. Electricity for most of northern Illinois is provided by Commonwealth Edison, also known as ComEd. Their service territory borders Iroquois County to the south, the Wisconsin border to the north, the Iowa border to the west and the Indiana border to the east. In northern Illinois, ComEd (a division of Exelon) operates the greatest number of nuclear generating plants in any U.S. state. Because of this, ComEd reports indicate that Chicago receives about 75% of its electricity from nuclear power. Recently, the city began installing wind turbines on government buildings to promote renewable energy. Natural gas is provided by Peoples Gas, a subsidiary of Integrys Energy Group, which is headquartered in Chicago. Domestic and industrial waste was once incinerated but it is now landfilled, mainly in the Calumet area. From 1995 to 2008, the city had a blue bag program to divert recyclable refuse from landfills. Because of low participation in the blue bag programs, the city began a pilot program for blue bin recycling like other cities. This proved successful and blue bins were rolled out across the city. Health systems. The Illinois Medical District is on the Near West Side. It includes Rush University Medical Center, ranked as the second best hospital in the Chicago metropolitan area by "U.S. News & World Report" for 2014–16, the University of Illinois Medical Center at Chicago, Jesse Brown VA Hospital, and John H. Stroger Jr. Hospital of Cook County, one of the busiest trauma centers in the nation. Two of the country's premier academic medical centers reside in Chicago, including Northwestern Memorial Hospital and the University of Chicago Medical Center. The Chicago campus of Northwestern University includes the Feinberg School of Medicine; Northwestern Memorial Hospital, which is ranked as the best hospital in the Chicago metropolitan area by "U.S. News & World Report" for 2017–18; the Shirley Ryan AbilityLab (formerly named the Rehabilitation Institute of Chicago), which is ranked the best U.S. rehabilitation hospital by "U.S. News & World Report"; the new Prentice Women's Hospital; and Ann & Robert H. Lurie Children's Hospital of Chicago. The University of Illinois College of Medicine at UIC is the second-largest medical school in the United States (2,600 students, including those at campuses in Peoria, Rockford and Urbana–Champaign). In addition, the Chicago Medical School and Loyola University Chicago's Stritch School of Medicine are located in the suburbs of North Chicago and Maywood, respectively. The Midwestern University Chicago College of Osteopathic Medicine is in Downers Grove. 
The American Medical Association, Accreditation Council for Graduate Medical Education, Accreditation Council for Continuing Medical Education, American Osteopathic Association, American Dental Association, Academy of General Dentistry, Academy of Nutrition and Dietetics, American Association of Nurse Anesthetists, American College of Surgeons, American Society for Clinical Pathology, American College of Healthcare Executives, the American Hospital Association, and Blue Cross and Blue Shield Association are all based in Chicago.
Cyrix 6x86
The Cyrix 6x86 is a line of sixth-generation, 32-bit x86 microprocessors designed and released by Cyrix in 1995. Cyrix, being a fabless company, had the chips manufactured by IBM and SGS-Thomson. The 6x86 was made as a direct competitor to Intel's Pentium microprocessor line, and was pin-compatible with it. During the 6x86's development, the majority of applications (office software as well as games) performed almost entirely integer operations. The designers foresaw that future applications would most likely maintain this instruction focus. So, to optimize the chip's performance for what they believed to be the most likely workload, the integer execution resources received most of the transistor budget. This would later prove to be a strategic mistake, as the popularity of the P5 Pentium caused many software developers to hand-optimize code in assembly language to take advantage of the P5 Pentium's tightly pipelined and lower-latency FPU. For example, the highly anticipated first-person shooter "Quake" used highly optimized assembly code designed almost entirely around the P5 Pentium's FPU. As a result, the P5 Pentium significantly outperformed other CPUs in the game. After Cyrix was bought by National Semiconductor and later VIA, the 6x86 continued to be produced until the early 2000s. History. The 6x86, previously known under the codename "M1", was announced by Cyrix in October 1995. At launch, only the 100 MHz (P120+) version was available, with 120 MHz (P150+) and 133 MHz (P166+) models planned to follow. The 100 MHz (P120+) 6x86 was available to OEMs for a price of $450 per chip in bulk quantities. In mid-February 1996, Cyrix announced that the P166+, P150+, and P133+ would be added to the 6x86 model line. IBM, which manufactured the chips, also announced that it would be selling its own versions. The 6x86 P200+ was planned for the end of 1996, but ended up being released in June of that year. The M2 (6x86MX) was first announced to be in development in mid-1996. It would have MMX support and 32-bit optimization. The M2 would also have some of the same features as the Intel Pentium Pro, such as register renaming, out-of-order completion, and speculative execution. Additionally, it would have 64 KB of cache, compared with the 16 KB of the original 6x86 and the Pentium Pro. In March 1997, when asked when the M2 line of processors would begin shipping, Cyrix UK managing director Brendan Sherry stated, "I've read it's going to be May but we've said late Q2 all along and I'm pretty sure we'll make that." The 6x86L was first released in January 1997 to address the heat issues with the original 6x86 line. The 6x86L had a lower core voltage (Vcore) and required a split power plane voltage regulator. In April 1997, the first laptop to use the 6x86 processor went on sale. It was sold by TigerDirect and had a 12.1-inch DSTN display, 16 MB of memory, a 10x CD-ROM drive, and a 1.3 GB hard disk drive, with a base price of $1,899. On May 27, 1997, Cyrix said it would announce details of the new chip line (the 6x86MX) the day before Computex in June 1997. At the low end of the series, the PR166 6x86MX was available for $190, with higher-end PR200 and PR233 versions available for $240 and $320. IBM, as the producer of Cyrix's chips, would also sell its own version. Cyrix hoped to ship tens of thousands of units in June 1997 and up to 1 million by the end of the year. Cyrix also expected to release a 266 MHz chip by the end of 1997 and a 300 MHz chip in the first quarter of 1998. 
The M2 chips had slightly better floating-point performance, with addition and multiplication times cut by a third, but the FPU was still slower than the Intel Pentium's. The M2 also had the full MMX instruction set, 64 KB of cache in place of the original 16 KB, and a lower core voltage of 2.5 V, down from the 3.3 V of the original 6x86 line. National Semiconductor acquired Cyrix in July 1997. National Semiconductor was not interested in high-performance processors but rather in system-on-a-chip devices, and wanted to shift the focus of Cyrix to the MediaGX line. In January 1998, National Semiconductor produced a 6x86MX processor on a 0.25 micron process technology. This reduced the chip size from 150 square millimeters to 88. National shifted its production of the MII and MediaGX to the 0.25 micron process by August. In September 1998, National Semiconductor was reported to be ending IBM's licensing partnership with Cyrix. This was because National wanted to increase production of Cyrix chips in its own facilities, and because having IBM produce Cyrix's chips was causing problems such as lost profits, with IBM frequently pricing its versions of Cyrix's chips lower. National would pay IBM $50–55 million to end the partnership, which concluded the following April, and would then move chip production to its own facility in South Portland, Maine. The Cyrix MII was released in May 1998. These chips generated less excitement than had been hoped, as they were essentially a rebranding of the 6x86MX. In December, these chips cost $80 for an MII-333, $59 for an MII-300, $55 for an MII-266, and $48 for an MII-233. In May 1999, National Semiconductor decided to leave the PC chip market due to significant losses, and put the Cyrix CPU division up for sale. VIA bought the Cyrix line in June 1999 and ended the development of high-performance processors. The MII-433GP would be the last processor produced by Cyrix. After VIA's acquisition, the 6x86/L was discontinued, but the 6x86MX/MII line continued to be sold by VIA. VIA would continue to produce the MII throughout the early 2000s. It was expected to be discontinued when the VIA Cyrix III was released. However, the MII was still available for sale until mid/late 2003, remaining listed as a product on VIA's website until October of that year, and it still saw use in devices such as network computers. Architecture. The 6x86 is superscalar and superpipelined and performs register renaming, speculative execution, out-of-order execution, and data dependency removal. However, it continued to use native x86 execution and ordinary microcode only, like Centaur's WinChip, unlike its competitors Intel and AMD, which introduced dynamic translation to micro-operations with the Pentium Pro and K5. The 6x86 is socket-compatible with the Intel P54C Pentium, and was offered in six performance levels: PR 90+, PR 120+, PR 133+, PR 150+, PR 166+ and PR 200+. These performance levels do not map to the clock speed of the chip itself (for example, a PR 133+ ran at 110 MHz, a PR 166+ ran at 133 MHz, etc.). Internally, it has a 16 KB unified primary cache, alongside which sits a fully associative 256-byte instruction line cache that serves as the primary instruction cache. The 6x86 and 6x86L were not completely compatible with the Intel P5 Pentium instruction set and were not multiprocessor-capable. For this reason, the chip identified itself as an 80486 and disabled the CPUID instruction by default. CPUID support could be enabled by first enabling the extended configuration (CCR) registers and then setting bit 7 of CCR4. 
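A minimal sketch of that unlock sequence, assuming a Linux/x86 userspace tool with port I/O privileges, is shown below. The Cyrix configuration registers are reached through an index/data pair at I/O ports 0x22/0x23; the register indices used here (CCR3 at 0xC3 with its MAPEN field in bits 4 to 7, CCR4 at 0xE8) follow the commonly documented 6x86 register map, and the helper names are illustrative rather than part of any Cyrix or Linux API.

#include <stdio.h>
#include <sys/io.h>   /* iopl(), inb(), outb() -- Linux on x86 */

static unsigned char get_ccr(unsigned char reg)
{
    outb(reg, 0x22);               /* select the configuration register */
    return inb(0x23);              /* read its current value */
}

static void set_ccr(unsigned char reg, unsigned char val)
{
    outb(reg, 0x22);
    outb(val, 0x23);
}

int main(void)
{
    if (iopl(3) != 0) {            /* need I/O privilege; run as root */
        perror("iopl");
        return 1;
    }
    unsigned char ccr3 = get_ccr(0xC3);
    set_ccr(0xC3, (ccr3 & 0x0F) | 0x10);  /* MAPEN = 1: expose the extended CCRs */
    set_ccr(0xE8, get_ccr(0xE8) | 0x80);  /* set CCR4 bit 7: enable CPUID */
    set_ccr(0xC3, ccr3);                  /* restore CCR3, hiding the extended CCRs */
    puts("CPUID enable bit set.");
    return 0;
}

A real tool would first confirm that the processor actually is a Cyrix 6x86/6x86L (for example via the chip's DIR0/DIR1 identification registers), since on other hardware the same port writes would land on whatever chipset logic happens to decode ports 0x22/0x23.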
As the sketch above illustrates, CPUID support could be enabled by first enabling the extended CCR registers and then setting bit 7 in CCR4. The lack of full P5 Pentium compatibility caused problems with some applications because programmers had begun to use P5 Pentium-specific instructions. Some companies released patches for their products to make them function on the 6x86. Compatibility with the Pentium was improved in the 6x86MX, by adding a Time Stamp Counter to support the P5 Pentium's RDTSC instruction. Support for the Pentium Pro's CMOVcc instructions was also added. Performance. Similarly to AMD with its K5 and early K6 processors, Cyrix used a PR rating (Performance Rating) to relate its chips' performance to the Intel P5 Pentium (pre-P55C), as the 6x86's higher per-clock performance relative to a P5 Pentium could be quantified against a higher-clocked Pentium part. For example, a 133 MHz 6x86 would match or outperform a P5 Pentium at 166 MHz, and as a result Cyrix could market the 133 MHz chip as being a P5 Pentium 166's equal. However, the PR rating was not an entirely truthful representation of the 6x86's performance. While the 6x86's integer performance was significantly higher than the P5 Pentium's, its floating point performance was more mediocre—between 2 and 4 times the performance of the 486 FPU per clock cycle (depending on the operation and precision). The FPU in the 6x86 was largely the same circuitry that was developed for Cyrix's earlier high performance 8087/80287/80387-compatible coprocessors, which was very fast for its time—the Cyrix FPU was much faster than the 80387, and even the 80486 FPU. However, it was still considerably slower than the new and completely redesigned P5 Pentium and P6 Pentium Pro-Pentium III FPUs. One of the main features of the P5/P6 FPUs is that they supported interleaving of FPU and integer instructions in their design, which the Cyrix chips did not integrate. This caused very poor performance with Cyrix CPUs in games and software that took advantage of this. Therefore, despite being very fast clock for clock, the 6x86 and MII were forced to compete at the low end of the market, as the AMD K6 and Intel P6 Pentium II were always ahead on clock speed. The 6x86's and MII's old-generation "486 class" floating point unit, combined with an integer section that was at best on par with the newer P6 and K6 chips, meant that Cyrix could no longer compete in performance. Models and variants. 6x86. The "6x86" (codename M1) was released by Cyrix in 1996. The first generation of the 6x86 had heat problems, primarily because the chips produced more heat than other x86 CPUs of the day; as such, computer builders sometimes did not equip them with adequate cooling. The CPUs topped out at around 25 W heat output (like the AMD K6), whereas the P5 Pentium produced around 15 W of waste heat at its peak. However, both figures would be a fraction of the heat generated by many high-performance processors some years later. Shortly after the original M1, the M1R was released. The M1R was a switch from SGS-Thomson's 3M process to IBM's 5M process, making the 6x86 chips 50% smaller. 6x86L. The 6x86L (codename M1L) was later released by Cyrix to address heat issues; the "L" standing for "low-power". Improved manufacturing technologies permitted the use of a lower Vcore. Just like the Pentium MMX, the 6x86L required a split power plane voltage regulator with separate voltages for I/O and CPU core. 6x86MX / MII. 
Another release of the 6x86, the 6x86MX, added MMX compatibility along with the EMMI instruction set, improved compatibility with the Pentium and Pentium Pro by adding a Time Stamp Counter and the CMOVcc instructions respectively, and quadrupled the primary cache size to 64 KB. The 256-byte instruction line cache can be turned into a scratchpad cache to provide support for multimedia operations. Later revisions of this chip were renamed MII, to better compete with the Pentium II processor. The 6x86MX/MII was late to market and could not scale well in clock speed with the manufacturing processes used at the time.
6888
49145446
https://en.wikipedia.org/wiki?curid=6888
Colon classification
Colon classification (CC) is a library catalogue system developed by Shiyali Ramamrita Ranganathan. It was an early faceted (or analytico-synthetic) classification system. The first edition of colon classification was published in 1933, followed by six more editions. It is especially used in libraries in India. Its name originates from its use of colons to separate facets into classes. Many other classification schemes, some of which are unrelated, also use colons and other punctuation to perform various functions. Originally, CC used only the colon as a separator, but since the second edition, CC has used four other punctuation symbols to identify each facet type. In CC, facets describe "personality" (the most specific subject), matter, energy, space, and time (PMEST). These facets are generally associated with every item in a library, and thus form a reasonably universal sorting system. As an example, the subject "research in the cure of tuberculosis of lungs by x-ray conducted in India in 1950" would be categorized as: This is summarized in a specific call number: Organization. The colon classification system uses 42 main classes that are combined with other letters, numbers, and marks in a manner resembling the Library of Congress Classification. Facets. CC uses five primary categories, or facets, to specify the sorting of a publication. Collectively, they are called "PMEST": Other symbols can be used to indicate components of facets called isolates, and to specify complex combinations or relationships between disciplines. Classes. The following are the main classes of CC, with some subclasses, the main method used to sort the subclass using the PMEST scheme and examples showing application of PMEST. Example. A common example of the colon classification is:
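The worked example itself does not survive in this text. As a purely illustrative sketch, the code below joins a set of hypothetical facet codes with the punctuation marks conventionally associated with each PMEST facet type in CC (a comma for personality, a semicolon for matter, a colon for energy, a period for space and an apostrophe for time); both the facet values and the resulting string are placeholders, not an actual CC call number.

/* Illustrative sketch only: assembling a PMEST facet sequence with the
   separator punctuation conventionally used in Colon Classification.
   The facet codes below are hypothetical placeholders, not real CC notation. */
#include <stdio.h>

struct pmest {
    const char *main_class;   /* discipline, e.g. a single letter */
    const char *personality;  /* the most specific subject        */
    const char *matter;
    const char *energy;
    const char *space;
    const char *time;
};

static void build_call_number(const struct pmest *f, char *out, size_t n)
{
    /* comma = personality, semicolon = matter, colon = energy,
       period = space, apostrophe = time */
    snprintf(out, n, "%s,%s;%s:%s.%s'%s",
             f->main_class, f->personality, f->matter,
             f->energy, f->space, f->time);
}

int main(void)
{
    struct pmest subject = { "X", "12", "34", "5", "67", "N8" };  /* placeholders */
    char call_number[64];

    build_call_number(&subject, call_number, sizeof call_number);
    printf("%s\n", call_number);   /* prints X,12;34:5.67'N8 */
    return 0;
}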
6889
46360563
https://en.wikipedia.org/wiki?curid=6889
Census
A census (from Latin "censere", 'to assess') is the procedure of systematically acquiring, recording, and calculating information about the members of a given population, which is then usually displayed through statistics. This term is used mostly in connection with national population and housing censuses; other common censuses cover agriculture, traditional culture, business, supplies, and traffic. The United Nations (UN) defines the essential features of population and housing censuses as "individual enumeration, universality within a defined territory, simultaneity and defined periodicity", and recommends that population censuses be taken at least every ten years. UN recommendations also cover census topics to be collected, official definitions, classifications, and other useful information to coordinate international practices. The UN's Food and Agriculture Organization (FAO), in turn, defines the census of agriculture as "a statistical operation for collecting, processing and disseminating data on the structure of agriculture, covering the whole or a significant part of a country." "In a census of agriculture, data are collected at the holding level." The word is of Latin origin: during the Roman Republic, the census was a list of all adult males fit for military service. The modern census is essential to international comparisons of any type of statistics, and censuses collect data on many attributes of a population, not just the number of individuals. Censuses typically began as the only method of collecting national demographic data and are now part of a larger system of different surveys. Although population estimates remain an important function of a census, including the exact geographic distribution of the population or the agricultural population, statistics can be produced about combinations of attributes, e.g., education by age and sex in different regions. Current administrative data systems allow for other approaches to enumeration with the same level of detail but raise concerns about privacy and the possibility of biasing estimates. A census can be contrasted with sampling, in which information is obtained only from a subset of a population; typically, main population estimates are updated by such intercensal estimates. Modern census data are commonly used for research, business marketing, and planning, and as a baseline for designing sample surveys by providing a sampling frame such as an address register. Census counts are necessary to adjust samples to be representative of a population by weighting them, as is common in opinion polling. Similarly, stratification requires knowledge of the relative sizes of different population strata, which can be derived from census enumerations. In some countries, the census provides the official counts used to apportion the number of elected representatives to regions (sometimes controversially – e.g., "Utah v. Evans"). In many cases, a carefully chosen random sample can provide more accurate information than attempts to get a population census. History. Iran. One of the earliest systematic censuses in world history was conducted during the early Achaemenid period, up until the reign of Darius the Great in Ancient Iran. This census, aimed at financial planning, military organization, and tax collection, spanned regions across three continents: Asia, Africa, and Europe. 
It included data on population numbers, the wealth of cities and provinces (Satrapies), precise assessments of agricultural lands, the resources of each region, and other factors critical to determining state finances and planning for governance and military operations. In modern Iran, the first nationwide population and housing census was conducted in 1956 (1335 in the Iranian calendar), with the most recent one completed in 2016 (1395). According to Article 4 of the Iranian Statistical Center Law, this nationwide census is to be carried out every five years by order of the president. Egypt. The earliest Egyptian census was the cattle count, which counted not people but livestock (especially but not exclusively cows) for taxation purposes. During the early Old Kingdom it was taken every two years; the frequency increased over time. Human censuses in Egypt first appeared in the late Middle Kingdom and developed in the New Kingdom. Herodotus wrote that Ahmose I, first monarch of the New Kingdom, required every Egyptian to declare annually to the nomarch, "whence he gained his living". Under the Ptolemies and the Romans several censuses were conducted in Egypt by government officials. Ancient Greece. There are several accounts of ancient Greek city states carrying out censuses. Israel. Censuses are mentioned several times in the Biblical narrative. God commands a per capita tax to be paid with the census for the upkeep of the Tabernacle. The Book of Numbers is named after the counting of the Israelite population according to the house of the Fathers after the exodus from Egypt. A second census was taken while the Israelites were camped in the "plains of Moab". King David performed a census that produced disastrous results. His son, King Solomon, had all of the foreigners in Israel counted. China. One of the world's earliest preserved censuses was held in China in AD 2 during the Han dynasty, and is still considered by scholars to be quite accurate. The population was registered as having 57,671,400 individuals in 12,366,470 households, but on this occasion only taxable families had been taken into account, indicating the income and the number of soldiers who could be mobilized. Another census was held in AD 144. India. The oldest recorded census in India is thought to have occurred around 330 BC during the reign of Emperor Chandragupta Maurya under the leadership of Chanakya and Ashoka. Rome. The English term is taken directly from the Latin "census", from "censere" ("to estimate"). The census played a crucial role in the administration of the Roman government, as it was used to determine the class a citizen belonged to for both military and tax purposes. Beginning in the middle republic, it was usually carried out every five years. It provided a register of citizens and their property from which their duties and privileges could be listed. It is said to have been instituted by the Roman king Servius Tullius in the 6th century BC, at which time the number of arms-bearing citizens was supposedly counted at around 80,000. When the Romans conquered Judea in AD 6, the legate Publius Sulpicius Quirinius organized a census for tax purposes, which was partially responsible for the development of the Zealot movement and several failed rebellions against Rome ultimately ending in the Jewish Diaspora. The Gospel of Luke makes reference to Quirinius' census in relation to the birth of Jesus; based on variant readings of this passage, a minority of biblical scholars, including N. T. 
Wright, speculate that this passage refers to a separate registration conducted during the reign of Herod the Great, several years before Quirinius' census. The 15-year indiction cycle established by Diocletian in AD 297 was based on quindecennial censuses and formed the basis for dating in late antiquity and under the Byzantine Empire. Rashidun and Umayyad Caliphates. In the Middle Ages, the Caliphate began conducting regular censuses soon after its formation, beginning with the one ordered by the second Rashidun caliph, Umar. Medieval Europe. The Domesday Book was undertaken in AD 1086 by William I of England so that he could properly tax the land he had recently conquered. In 1183, a census was taken of the crusader Kingdom of Jerusalem, to ascertain the number of men and amount of money that could possibly be raised against an invasion by Saladin, sultan of Egypt and Syria. The first national census of France () was undertaken in 1328, mostly for fiscal purposes. It estimated the French population at 16 to 17 million. Inca Empire. In the 15th century, the Inca Empire had a unique way to record census information. The Incas did not have any written language but recorded information collected during censuses and other numeric information as well as non-numeric data on quipus, strings from llama or alpaca hair or cotton cords with numeric and other values encoded by knots in a base-10 positional system. Spanish Empire. On May 25, 1577, King Philip II of Spain ordered by royal cédula the preparation of a general description of Spain's holdings in the Indies. Instructions and a questionnaire, issued in 1577 by the Office of the Cronista Mayor, were distributed to local officials in the Viceroyalties of New Spain and Peru to direct the gathering of information. The questionnaire, composed of fifty items, was designed to elicit basic information about the nature of the land and the life of its peoples. The replies, known as "", were written between 1579 and 1585 and were returned to the Cronista Mayor in Spain by the Council of the Indies. Sampling. A census is often construed as the opposite of a sample as it intends to count everyone in a population, rather than a fraction. However, population censuses do rely on a sampling frame to count the population. This is the only way to be sure that everyone has been included, as otherwise those not responding would not be followed up on and individuals could be missed. The fundamental premise of a census is that the population is not known, and a new estimate is to be made by the analysis of primary data. The use of a sampling frame is counterintuitive as it suggests that the population size is already known. However, a census is also used to collect attribute data on the individuals in the nation, not only to assess population size. This process of sampling marks the difference between a historical census, which was a house-to-house process or the product of an imperial decree, and the modern statistical project. The sampling frame used by a census is almost always an address register. Thus, it is not known if there are any residents or how many people there are in each household. Depending on the mode of enumeration, a form is sent to the householder, an enumerator calls, or administrative records for the dwelling are accessed. As a preliminary to the dispatch of forms, census workers will check for any address problems on the ground. 
While it may seem straightforward to use the postal service file for this purpose, this can be out of date and some dwellings may contain several independent households. A particular problem is what is termed "communal establishments", a category that includes student residences, religious orders, homes for the elderly, people in prisons, etc. As these are not easily enumerated by a single householder, they are often treated differently and visited by special teams of census workers to ensure they are classified appropriately. Residence definitions. Individuals are normally counted within households, and information is typically collected about the household structure and the housing. For this reason, international documents refer to censuses of population and housing. Normally the census response is made by a household, indicating details of individuals resident there. An important aspect of census enumerations is determining which individuals can be counted and which cannot be counted. Broadly, three definitions can be used: "de facto" residence; "de jure" residence; and permanent residence. This is important in considering individuals who have multiple or temporary addresses. Every person should be identified uniquely as a resident in one place; but the place where they happen to be on census day, their "de facto" residence, may not be the best place to count them. Where an individual uses services may be more useful, and this is at their usual residence. An individual may be recorded at a "permanent" address, which might be a family home for students or long-term migrants. A precise definition of residence is needed, to decide whether visitors to a country should be included in the population count. This is becoming more important as students travel abroad for education for a period of several years. Other groups causing problems with enumeration are newborn babies, refugees, people away on holiday, people moving home around census day, and people without a fixed address. People with second homes, because they are working in another part of the country or have a holiday cottage, are difficult to fix at a particular address; this sometimes causes double counting or houses being mistakenly identified as vacant. Another problem is where people use a different address at different times e.g. students living at their place of education in term time but returning to a family home during vacations, or children whose parents have separated who effectively have two family homes. Census enumeration has always been based on finding people where they live, as there is no systematic alternative: any list used to find people is likely to be derived from census activities in the first place. Recent UN guidelines provide recommendations on enumerating such complex households. In the census of agriculture, data is collected at the agricultural holding unit. An agricultural holding is an economic unit of agricultural production under single management comprising all livestock kept and all land used wholly or partly for agricultural production purposes, without regard to title, legal form, or size. Single management may be exercised by an individual or household, jointly by two or more individuals or households, by a clan or tribe, or by a juridical person such as a corporation, cooperative, or government agency. 
The holding's land may consist of one or more parcels, located in one or more separate areas or one or more territorial or administrative divisions, providing the parcels share the same production means, such as labor, farm buildings, machinery or draught animals. Enumeration strategies. Historical censuses used direct field enumeration and assumed that the information collected was fully accurate, with no measurement error. Modern approaches take into account the problems of overcount and undercount and the coherence of census enumerations with other official sources of data. For instance, during the 2020 U.S. Census, the Census Bureau counted people primarily by collecting answers sent by mail, on the internet, over the phone, or using shared information through proxies. These methods accounted for 95.5 percent of all occupied housing units in the United States. This reflects a realist approach to measurement, acknowledging that under any definition of residence there is a true value of the population but this can never be measured with complete accuracy. An important aspect of the census process is to evaluate the quality of the data. Many countries use a post-enumeration survey to adjust the raw census counts. This works similarly to capture-recapture estimation for animal populations. Among census experts, this method is called dual system enumeration (DSE). A sample of households is visited by interviewers who record the details of the household as of census day. These data are then matched to census records, and the number of people missed can be estimated by considering the number of people who are included in one count but not the other. This allows adjustments to the count for non-response, varying between different demographic groups. An explanation using a fishing analogy can be found in "Trout, Catfish and Roach..." which won an award from the Royal Statistical Society for excellence in official statistics in 2011. Triple system enumeration has been proposed as an improvement as it would allow evaluation of the statistical dependence of pairs of sources. However, as the matching process is the most difficult aspect of census estimation this has never been implemented for a national enumeration. It would also be difficult to identify three different sources that were sufficiently different to make the triple system effort worthwhile. The DSE approach has another weakness in that it assumes there is no person counted twice (over count). In "de facto" residence definitions this would not be a problem but in "de jure" definitions individuals risk being recorded on more than one form leading to double counting. A particular problem here is students who often have a term time and family address. Several countries have used a system known as short form/long form. This is a sampling strategy that randomly chooses a proportion of people to send a more detailed questionnaire to (the long form). Everyone receives the short-form questions. This means more data are collected, but without imposing a burden on the whole population. This also reduces the burden on the statistical office. Indeed, in the UK until 2001 all residents were required to fill in the whole form but only a 10% sample was coded and analysed in detail. New technology means that all data are now scanned and processed. 
During the 2011 Canadian census there was controversy about the cessation of the mandatory long-form census; the head of Statistics Canada, Munir Sheikh, resigned upon the federal government's decision to do so. The use of alternative enumeration strategies is increasing but these are not as simple as many people assume and are only used in developed countries. The Netherlands has been most advanced in adopting a census using administrative data. This allows a simulated census to be conducted by linking several different administrative databases at an agreed time. Data can be matched, and an overall enumeration established allowing for discrepancies between different data sources. A validation survey is still conducted in a similar way to the post-enumeration survey employed in a traditional census. Other countries that have a population register use this as a basis for all the census statistics needed by users. This is most common among Nordic countries but requires many distinct registers to be combined, including population, housing, employment, and education. These registers are then combined and brought up to the standard of a statistical register by comparing the data from different sources and ensuring the quality is sufficient for official statistics to be produced. A recent innovation is the French instigation of a rolling census program with different regions enumerated each year so that the whole country is completely enumerated every 5 to 10 years. In Europe, in connection with the 2010 census round, many countries adopted alternative census methodologies, often based on the combination of data from registers, surveys and other sources. Technology. Censuses have evolved in their use of technology; censuses in 2010 used many types of computing. In Brazil, handheld devices were used by enumerators to locate residences on the ground. In many countries, census returns could be made via the Internet as well as in paper form. DSE is facilitated by computer matching techniques that can be automated, such as propensity score matching. In the UK, all census formats are scanned and stored electronically before being destroyed, replacing the need for physical archives. The record linking to perform an administrative census would not be possible without large databases being stored on computer systems. There are sometimes problems in introducing new technology. The US census had been intended to use handheld computers, but cost escalated, and this was abandoned, with the contract being sold to Brazil. The online response has some advantages, but one of the functions of the census is to make sure everyone is counted accurately. A system that allowed people to enter their address without verification would be open to abuse. Therefore, households have to be verified on the ground, typically by an enumerator visit or post out. Paper forms are still necessary for those without access to the internet. It is also possible that the hidden nature of an administrative census means that users are not engaged with the importance of contributing their data to official statistics. Alternatively, population estimations may be carried out remotely with geographic information system (GIS) and remote sensing technologies. Development. According to the United Nations Population Fund (UNFPA), "The information generated by a population and housing census – numbers of people, their distribution, their living conditions and other key data – is critical for development." 
This is because this type of data is essential for policymakers so that they know where to invest. Many countries have outdated or inaccurate data about their populations and thus have difficulty in addressing the needs of the population. The UNFPA said: "The unique advantage of the census is that it represents the entire statistical universe, down to the smallest geographical units, of a country or region. Planners need this information for all kinds of development work, including: assessing demographic trends; analysing socio-economic conditions; designing evidence-based poverty-reduction strategies; monitoring and evaluating the effectiveness of policies; and tracking progress toward national and internationally agreed development goals." In addition to making policymakers aware of population issues, the census is also an important tool for identifying forms of social, demographic or economic exclusion, such as inequalities relating to race, ethnicity, and religion, as well as disadvantaged groups such as those with disabilities and the poor. An accurate census can empower local communities by providing them with the necessary information to participate in local decision-making and ensuring they are represented. The importance of the census of agriculture for development is that it gives a snapshot of the structure of the agricultural sector in a country and, when compared with previous censuses, provides an opportunity to identify trends and structural transformations of the sector, and points towards areas for policy intervention. Census data are used as a benchmark for current statistics and their value is increased when they are employed together with other data sources. Uses of data. Early censuses in the 19th and 20th centuries collected paper documents which had to be collated by hand, so the statistical information obtained was quite basic. The government that owned the data could publish statistics on the state of the nation. The results were used to measure changes in the population and apportion representation. Population estimates could be compared to those of other countries. By the beginning of the 20th century, censuses were recording households and some indications of their employment. In some countries, census archives are released for public examination after many decades, allowing genealogists to track the ancestry of interested people. Archives provide a substantial historical record which may challenge established views. Information such as job titles and arrangements for the destitute and sick may also shed light on the historical structure of society. Political considerations influence the census in many countries. In Canada in 2010 for example, the government under the leadership of Stephen Harper abolished the mandatory long-form census. This abolition was a response to protests from some Canadians who resented the personal questions. The long-form census was reinstated by the Justin Trudeau government in 2016. Research. As governments assumed responsibility for schooling and welfare, large government research departments made extensive use of census data. Population projections could be made, to help plan for provision in local government and regions. Central government could also use census data to allocate funding. Even in the mid 20th century, census data was only directly accessible to large government departments. However, computers meant that tabulations could be used directly by university researchers, large businesses and local government offices. 
They could use the detail of the data to answer new questions and add to local and specialist knowledge. Nowadays, census data are published in a wide variety of formats to be accessible to business, all levels of government, media, students and teachers, charities, and any citizen who is interested; researchers in particular have an interest in the role of Census Field Officers (CFO) and their assistants. Data can be represented visually or analysed in complex statistical models, to show the difference between certain areas, or to understand the association between different personal characteristics. Census data offer a unique insight into small areas and small demographic groups which sample data would be unable to capture with precision. In the census of agriculture, users need census data to: Privacy and data stewardship. Although the census provides useful statistical information about a population, the availability of this information could sometimes lead to abuses, political or otherwise, by the linking of individuals' identities to anonymous census data. This is particularly important when individuals' census responses are made available in microdata form, but even aggregate-level data can result in privacy breaches when dealing with small areas and/or rare subpopulations. For instance, when reporting data from a large city, it might be appropriate to give the average income for black males aged between 50 and 60. However, doing this for a town that only has two black males in this age group would be a breach of privacy because either of those persons, knowing his own income and the reported average, could determine the other man's income. Typically, census data are processed to obscure such individual information. Some agencies do this by intentionally introducing small statistical errors to prevent the identification of individuals in marginal populations; others swap variables for similar respondents. Whatever is done to reduce the privacy risk, new and improved electronic analysis of data can threaten to reveal sensitive individual information. Managing this risk is known as statistical disclosure control. Another possibility is to present survey results by means of statistical models in the form of a multivariate distribution mixture. The statistical information in the form of conditional distributions (histograms) can be derived interactively from the estimated mixture model without any further access to the original database. As the final product does not contain any protected microdata, the model-based interactive software can be distributed without any confidentiality concerns. Another method is simply to release no data at all, except very large scale data directly to the central government. Differing release strategies of governments have led to an international project (IPUMS) to co-ordinate access to microdata and corresponding metadata. Projects such as SDMX also promote the standardisation of metadata, so that best use can be made of the minimal data available. Boycotts. Censuses have sometimes been the subject of threatened or realized boycotts. Political historian Laurence Cooley categorizes census boycotts into those 'motivated by concerns specifically related to the census itself' and those where 'the rationale for boycotting is not to influence the questionnaire design or the enumeration process, but rather in pursuit of aims that are incidental to the census'. 
Boycotts of the first kind include Kenya in 2009, when some ethnic groups threatened to boycott the census if they were not allocated their own categories, and Myanmar in 2014, when nationalists threatened a boycott if the Rohingya were allowed to self-identify. Examples of boycotts where the census was more of a symbolic target include the 1911 UK census, which suffrage organizations boycotted to protest against women's lack of voting rights, using the slogan 'no vote, no census'. Other prominent cases of census boycotts include West Germany in 1983 and 1987. World population estimates. The earliest estimate of the world population was made by Giovanni Battista Riccioli in 1661; the next by Johann Peter Süssmilch in 1741, revised in 1762; the third by Karl Friedrich Wilhelm Dieterici in 1859. In 1931, Walter Willcox published a table in his book, "International Migrations: Volume II Interpretations", that estimated the 1929 world population to be roughly 1.8 billion. Impact of COVID-19. Impact. The UNFPA predicted that the COVID-19 pandemic would threaten the successful conduct of censuses of population and housing in many countries through delays, interruptions that compromise quality, or complete cancellation of census projects. Domestic and donor financing for censuses was diverted to address COVID-19, leaving censuses without crucial funds. Several countries chose to postpone the census. The pandemic also affected the planning and implementation of censuses of agriculture across the world. The extent of the impact varied according to what stage the censuses were at, ranging from the planning stage (i.e. staffing, procurement, preparation of frames, questionnaires), through fieldwork (field training and enumeration), to the data processing/analysis stage. The census of agriculture's reference period is the agricultural year. Thus, a delay in any census activity may be critical and can result in a full year postponement of the enumeration if the agricultural season is missed. Some publications have discussed the impact of COVID-19 on national censuses of agriculture. Adaptation. The United Nations Population Fund (UNFPA) requested a global effort to assure that even where a census was delayed, census planning and preparations were not cancelled, but continued in order to assure that implementation could proceed safely once the pandemic was under control. While new census methods, including online, register-based, and hybrid approaches were being used across the world, these demanded extensive planning and preconditions that could not be created at short notice. The low supply of personal protective equipment to protect against COVID-19 had immediate implications for conducting censuses in communities at risk of transmission. The UNFPA Procurement Office partnered with other agencies to explore new supply chains and resources.
6896
33842945
https://en.wikipedia.org/wiki?curid=6896
Outline of chemistry
The following outline acts as an overview of and topical guide to chemistry: Chemistry is the science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to the chemical reactions. Chemistry is centrally concerned with atoms and their interactions with other atoms, and particularly with the properties of chemical bonds. Summary. Chemistry can be described as all of the following: Branches. Other History. History of chemistry Atomic theory. Atomic theory Thermochemistry. Thermochemistry "For more chemists, see: Nobel Prize in Chemistry and List of chemists"
6901
28481209
https://en.wikipedia.org/wiki?curid=6901
Outline of critical theory
The following outline is provided as an overview of and topical guide to critical theory: Critical theory – the examination and critique of society and culture, drawing from knowledge across the social sciences and humanities. The term has two different meanings with different origins and histories: one originating in sociology and the other in literary criticism. This has led to the very literal use of 'critical theory' as an umbrella term to describe any theory founded upon critique. The term "Critical Theory" was first coined by Max Horkheimer in his 1937 essay "Traditional and Critical Theory".
6902
1301300380
https://en.wikipedia.org/wiki?curid=6902
Cotswolds
The Cotswolds ( ) is a region of South West England, along a range of wolds or rolling hills that rise from the meadows of the upper River Thames to an escarpment above the Severn Valley and the Vale of Evesham. The area is defined by the bedrock of Jurassic limestone that creates a type of grassland habitat and is quarried for the golden-coloured Cotswold stone. It lies across the boundaries of several English counties: mainly Gloucestershire and Oxfordshire, and parts of Wiltshire, Somerset, Worcestershire, and Warwickshire. The highest point is Cleeve Hill at , just east of Cheltenham. The predominantly rural landscape contains stone-built villages, towns, stately homes and gardens featuring the local stone. A large area within the Cotswolds has been designated as a National Landscape (formerly known as Area of Outstanding Natural Beauty, or AONB) since 1966. The designation covers , with boundaries roughly across and long, stretching south-west from just south of Stratford-upon-Avon to just south of Bath, making it the largest National Landscape area and England's third-largest protected landscape. The Cotswold local government district is within Gloucestershire. Its main town is Cirencester. In 2021, the population of the district was 91,000. The much larger area referred to as the Cotswolds encompasses nearly . The population of the National Landscape area was 139,000 in 2016. History. The largest excavation of Jurassic period echinoderm fossils, including of rare and previously unknown species, occurred at a quarry in the Cotswolds in 2021. There is evidence of Neolithic settlement from burial chambers on Cotswold Edge, and there are remains of Bronze and Iron Age forts. Later the Romans built villas, such as at Chedworth, settlements such as Gloucester, and paved the Celtic path later known as Fosse Way. During the Middle Ages, thanks to the breed of sheep known as the Cotswold Lion, the Cotswolds became prosperous from the wool trade with the continent, with much of the money made from wool directed towards the building of churches. The most successful era for the wool trade was 1250–1350; much of the wool at that time was sold to Italian merchants. The area still preserves numerous large, handsome Cotswold Stone "wool churches". In the 21st century, the affluent area has attracted wealthy Londoners and others who own second homes there or who have chosen to retire to the Cotswolds. Etymology. The name "Cotswold" is popularly believed to mean the "sheep enclosure in rolling hillsides", incorporating the term "wold", meaning "forested hills", from the Anglian dialect term of Old English — cognate with the Weald, "forest", from the West Saxon dialect term of Old English. But for many years the English Place-Name Society has accepted that the term "Cotswold" is derived from "Codesuualt" of the 12th century or other variations on this form, the etymology of which is "Cod's-wold", meaning "Cod's high open land". "Cod" was interpreted as an Old English personal name, which may be recognised in further names: Cutsdean, Codeswellan, and Codesbyrig, some of which date to the 8th century. It has subsequently been noticed that "Cod" could derive philologically from a Brittonic female cognate "Cuda", a hypothetical mother goddess in Celtic mythology postulated to have been worshipped in the Cotswold region. Geography. The Cotswolds' spine runs southwest to northeast through six counties, particularly Gloucestershire, west Oxfordshire, and southwestern Warwickshire. 
The Cotswolds' northern and western edges are marked by steep escarpments down to the Severn valley and the Warwickshire Avon. This feature, known as the Cotswold escarpment or the Cotswold Edge, is a result of the uplifting (tilting) of the limestone layer, exposing its broken edge. This is a cuesta, in geological terms. The dip slope is to the southeast. On the eastern boundary lies the city of Oxford and on the west is Stroud. To the southeast, the upper reaches of the Thames Valley and towns such as Lechlade, Tetbury, and Fairford are often considered to mark the limit of the region. To the south the Cotswolds, with the characteristic uplift of the Cotswold Edge, reach beyond Bath, and towns such as Chipping Sodbury and Marshfield share elements of Cotswold character. The area is characterised by attractive small towns and villages built of the underlying Cotswold stone (a yellow oolitic limestone). This limestone is rich in fossils, particularly of fossilised sea urchins. Cotswold towns include Bourton-on-the-Water, Broadway, Chalford, Charlbury, Chipping Campden, Chipping Norton, Cricklade, Dursley, Malmesbury, Minchinhampton, Moreton-in-Marsh, Nailsworth, Northleach, Painswick, Stow-on-the-Wold, Stroud, Tetbury, Witney, Winchcombe and Wotton-under-Edge. In addition, much of Box lies in the Cotswolds. Bath, Cheltenham, Cirencester, Gloucester, Stroud, and Swindon are larger urban centres that border on, or are virtually surrounded by, the Cotswold AONB. Chipping Campden is notable as the home of the Arts and Crafts movement, founded by William Morris at the end of the 19th and beginning of the 20th centuries. Morris lived occasionally in Broadway Tower, a folly, now part of a country park. Chipping Campden is also known for the annual Cotswold Olimpick Games, a celebration of sports and games dating to the early 17th century. Of the Cotswolds' nearly , roughly 80 per cent is farmland. There are over of footpaths and bridleways, and of historic stone walls. The Cotswolds limestones form part of a range of sedimentary rocks deposited in the Middle Jurassic period, the Great Oolite Group and the Inferior Oolite Group. They run between Dorset on the English Channel coast and Scarborough on the Yorkshire coast of the North Sea. Although more famous for their limestone lithologies, they also contain sandstones and mudstones. Within the Cotswolds area, the Great Oolite Group contains limestone formations such as: Cornbrash, White Limestone and Athelstan Oolite. In this area, the Inferior Oolite Group contains limestones such as the Birdlip Limestone, Aston Limestone and Salperton Limestone formations. In the East Midlands, the Inferior Oolite Group contains Lincolnshire Limestone (plus Northampton Sandstones containing Ironstone that were quarried for the steelworks at Scunthorpe and Corby). In the southwest of England, the Ham Hill Limestone Member of the Bridport Sand Formation is a honey-coloured limestone reminiscent of the northern Cotswolds limestones. Such areas are sometimes referred to as the Notswolds due to their similarity with the Cotswolds. Economy. A 2017 report on employment within the Area of Outstanding Natural Beauty stated that the main sources of income were real estate, renting and business activities, manufacturing, and wholesale & retail trade repairs. Some 44% of residents were employed in these sectors. Agriculture is also important; 86% of the land in the AONB is used for this purpose. 
The primary crops include barley, beans, rapeseed and wheat, while the raising of sheep is also important; cows and pigs are also reared. The livestock sector has been declining since 2002. According to 2011 census data for the Cotswolds, the wholesale and retail trade was the largest employer (15.8% of the workforce), followed by education (9.7%) and health and social work (9.3%). The report also indicates that a relatively higher proportion of residents worked in agriculture, forestry and fishing, accommodation and food services, as well as in professional, scientific, and technical activities. Unemployment in the Cotswold District was among the lowest in the country. An August 2017 report showed only 315 unemployed persons, a decrease of five from a year earlier. Tourism. Tourism is a significant part of the economy. The Cotswold District area gained over £373 million from visitor spending on accommodation, £157 million on local attractions and entertainments, and about £100m on travel in 2016. In the larger Cotswolds Tourism area, including Stroud, Cheltenham, Gloucester and Tewkesbury, tourism generated about £1 billion in 2016, providing 200,000 jobs. Some 38 million day visits were made to the Cotswold Tourism area that year. Many travel guides direct tourists to Chipping Campden, Stow-on-the-Wold, Bourton-on-the-Water, Broadway, Bibury, and Stanton. Some of these locations can be very crowded at times. Roughly 300,000 people visit Bourton per year, for example, with about half staying for a day or less. The area also has numerous public walking trails and footpaths that attract visitors, including the Cotswold Way (part of the National Trails system) from Bath to Chipping Campden. Housing development. In August 2018, the final decision was made for a Local Plan that would lead to the building of nearly 7,000 additional homes by 2031, in addition to over 3,000 already built. Areas for development include Cirencester, Bourton-on-the-Water, Down Ampney, Fairford, Kemble, Lechlade, Northleach, South Cerney, Stow-on-the-Wold, Tetbury and Moreton-in-Marsh. Some of the money received from developers will be earmarked for new infrastructure to support the increasing population. Cotswold stone. Cotswold stone is a yellow oolitic Jurassic limestone. This limestone is rich in fossils, particularly of fossilised sea urchins. When weathered, the colour of buildings made or faced with this stone is often described as honey or golden. The stone varies in colour from north to south, being honey-coloured in the north and northeast, as in villages such as Stanton and Broadway; golden-coloured in the central and southern areas, as in Dursley and Cirencester; and pearly white in Bath. The rock outcrops at places on the Cotswold Edge; small quarries are common. The exposures are rarely sufficiently compact to be good for rock-climbing, but an exception is Castle Rock, on Cleeve Hill, near Cheltenham. In his 1934 book "English Journey", J. B. Priestley wrote of Cotswold buildings made of the local stone. He said: "The truth is that it has no colour that can be described. Even when the sun is obscured and the light is cold, these walls are still faintly warm and luminous, as if they knew the trick of keeping the lost sunlight of centuries glimmering about them." Cotswolds National Landscape. The term "Cotswolds National Landscape" was adopted in September 2020, using a proposed name replacement for Areas of Outstanding Natural Beauty (AONB). 
All AONBs in England and Wales were re-branded as "National Landscapes" in November 2023, although (as of 2024) the legal name and designation remains "Area of Outstanding Natural Beauty" under the Countryside and Rights of Way Act 2000, amending the National Parks and Access to the Countryside Act 1949. The term AONB is still used in this section. The Cotswolds National Landscape area (formerly the Cotswolds AONB) was originally designated as an Area of Outstanding Natural Beauty (AONB) in 1966, with an expansion on 21 December 1990 to . In 1991, all AONBs were measured again using modern methods, and the official area of the Cotswolds AONB was increased to . In 2000, the government confirmed that AONBs have the same landscape quality and status as National Parks. It is England's third-largest protected landscape, after the Lake District and Yorkshire Dales national parks. The Cotswolds National Landscape, which is the largest in England and Wales, stretches from the border regions of South Warwickshire and Worcestershire, through West Oxfordshire and Gloucestershire, and takes in parts of Wiltshire and of Bath and North East Somerset in the south. Gloucestershire County Council is responsible for sixty-three per cent of the AONB. The Cotswolds Conservation Board has the task of conserving and enhancing the AONB. Established under statute in 2004 as an independent public body, the Board carries out a range of work from securing funding for 'on the ground' conservation projects, to providing a strategic overview of the area for key decision makers, such as planning officials. The Board is funded by Natural England and the seventeen local authorities that are covered by the AONB. The Cotswolds AONB Management Plan 2018–2023 was adopted by the Board in September 2018. The landscape of the AONB is varied, including escarpment outliers, escarpments, rolling hills and valleys, enclosed limestone valleys, settled valleys, ironstone hills and valleys, high wolds and high wold valleys, high wold dip-slopes, dip-slope lowland and valleys, a low limestone plateau, cornbrash lowlands, farmed slopes, a broad floodplain valley, a large pastoral lowland vale, a settled unwooded vale, and an unwooded vale. While the beauty of the Cotswolds AONB is intertwined with that of the villages that seem almost to grow out of the landscape, the Cotswolds were primarily designated an Area of Outstanding Natural Beauty for the rare limestone grassland habitats as well as the old growth beech woodlands that typify the area. These habitat areas are also the last refuge for many other flora and fauna, with some so endangered that they are protected under the Wildlife and Countryside Act 1981. Cleeve Hill, and its associated commons, is a fine example of a limestone grassland and it is one of the few locations where the Duke of Burgundy butterfly may still be found in abundance. A June 2018 report stated that the AONB receives "23 million visitors a year, the third largest of any protected landscape". Earlier that year, Environment secretary Michael Gove announced that a panel would be formed to consider making some of the AONBs into National Parks. The review will file its report in 2019. In April 2018, the Cotswolds Conservation Board had written to Natural England "requesting that consideration be given to making the Cotswolds a National Park", according to Liz Eyre, chairman. 
This has led to some concern; one member of the Cotswold District Council said, "National Park designation is a significant step further and raises the prospect of key decision making powers being taken away from democratically elected councillors". In other words, Cotswold District Council would no longer have the authority to grant and refuse housing applications. Indicative of the Cotswolds' uniqueness and value is that five European Special Areas of Conservation, three national nature reserves and more than 80 Sites of Special Scientific Interest are within the Cotswolds AONB. The Cotswold Voluntary Wardens Service was established in 1968 to help conserve and enhance the area, and now has more than 300 wardens. The Cotswold Way is a long-distance footpath, just over long, running the length of the AONB, mainly on the edge of the Cotswold escarpment with views over the Severn Valley and the Vale of Evesham. Places of interest. Among the area's places of interest is Sudeley Castle at Winchcombe, noted for its garden. The present structure was built in the 15th century and may be on the site of a 12th-century castle. It is north of the spa town of Cheltenham, which has much Georgian architecture. Further south, towards Tetbury, is the fortress known as Beverston Castle, founded in 1229 by Maurice de Gaunt. In the same area is Calcot Manor, a manor house with origins in about 1300 as a tithe barn. Tetbury Market House was built in 1655. During the Middle Ages, Tetbury became an important market for Cotswold wool and yarn. Chavenage House is an Elizabethan-era manor house northwest of Tetbury. Chedworth Roman Villa, where several mosaic floors are on display, is near the Roman road known as the Fosse Way, north of the town of Corinium Dobunnorum (Cirencester). Cirencester Abbey was founded as an Augustinian monastery in 1117, and Malmesbury Abbey was one of the few English houses with a continual history from the 7th century through to the Dissolution of the Monasteries. An unusual house in this area is Quarwood, a Victorian Gothic house in Stow-on-the-Wold. The grounds, covering , include parkland, fish ponds, paddocks, garages, woodlands and seven cottages. Another is Woodchester Mansion, an unfinished, Gothic revival mansion house in Woodchester Park near Nympsfield. Newark Park is a Grade I listed country house of Tudor origins near the village of Ozleworth, Wotton-under-Edge. The house sits in an estate of at the Cotswold escarpment's southern end. Another of the many manor houses in the area, Owlpen Manor in the village of Owlpen in the Stroud district, is also Tudor and Grade I listed. Further north, Broadway Tower is a folly on Broadway Hill, near the village of Broadway, Worcestershire. To the south of the Cotswolds is Corsham Court, a country house in a park designed by Capability Brown in the town of Corsham, west of Chippenham, Wiltshire. Top attractions. According to users of the worldwide TripAdvisor travel site, in 2018 the following were among the best attractions in the Cotswolds: Transport. The Cotswolds lie between the M5, M40 and M4 motorways. The main A-roads through the area are: These all roughly follow the routes of ancient roads, some laid down by the Romans, such as Ermin Way and the Fosse Way. There are local bus services across the area, but some are infrequent. The River Thames flows from the Cotswolds and is navigable from Inglesham and Lechlade-on-Thames downstream to Oxford. West of Inglesham, 
the Thames and Severn Canal and the Stroudwater Navigation connected the Thames to the River Severn; this route is mostly disused nowadays but several parts are in the process of being restored. Railways. The area is bounded by two major rail routes: in the south by the main Bristol–Bath–London line (including the South Wales main line) and in the west by the Bristol–Birmingham main line. In addition, the Cotswold line runs through the Cotswolds from Oxford to Worcester, and the Golden Valley line runs across the hills from Swindon via Stroud to Gloucester, carrying fast and local services. Mainline rail services to the big cities run from railway stations such as Bath, Swindon, Oxford, Cheltenham, and Worcester. Mainline trains run by Great Western Railway to London Paddington also are available from Kemble station near Cirencester, Kingham station near Stow-on-the-Wold, Charlbury station, and Moreton-in-Marsh station. Additionally, there is the Gloucestershire Warwickshire Railway, a steam heritage railway over part of the closed Stratford–Cheltenham line, running from Cheltenham Racecourse through Gotherington, Winchcombe, and Hayles Abbey Halt to Toddington and Laverton. The preserved line has been extended to Broadway. Demographics. The population of the Cotswold local authority area in the 2021 census was 90,800, an increase of 9.6% from 82,900 in 2011. The percentages of usual residents in relationships, aged 16 and above, were: In 2021, 96.3% of people in Cotswold identified their ethnic group with the "White" category, a slight decrease from 97.8% in 2011. Over 1.3% identified as "Asian" or British Asian, 1.5% chose the "Mixed or Multiple" category, 0.4% were "Black, Black British, Caribbean or African" and 0.4% chose "Other". In culture. The Cotswold region has inspired several notable English composers. In the early 1900s, Herbert Howells and Ivor Gurney took long walks together over the hills, and Gurney urged Howells to make the landscape, including the nearby Malvern Hills, the inspiration for future work. In 1916, Howells wrote his first major piece, the "Piano Quartet in A minor," inspired by the magnificent view of the Malverns; he dedicated it to "the hill at Chosen (Churchdown) and Ivor Gurney who knows it". Another contemporary of theirs, Gerald Finzi, lived in nearby Painswick. Gustav Holst, who was born in Cheltenham, spent much of his youth playing the organ in Cotswold village churches, including at Cranham, after which he titled his tune for In the Bleak Midwinter. He also called his Symphony in F major, Op. 8, H47, "The Cotswolds". Holst's friend Ralph Vaughan Williams was born at Down Ampney in the Cotswolds and, though he moved to Surrey as a boy, gave the name of his native village to the tune for Come Down, O Love Divine. His opera "Hugh the Drover" depicts life in a Cotswold village and incorporates local folk melodies. In 1988, the 6th symphony (Op. 109) of composer Derek Bourgeois was titled "A Cotswold Symphony". The Cotswolds are a popular location for scenes in films and television programmes. The 2008 film "Better Things", directed by Duane Hopkins, is set in a small Cotswold village. The fictional detective Agatha Raisin lives in the fictional Cotswold village of Carsely. Other productions filmed in the Cotswolds or nearby, at least in part, include some of the Harry Potter series (Gloucester Cathedral), "Bridget Jones's Diary" (Snowshill), "Pride and Prejudice" (Cheltenham Town Hall), and "Braveheart" (Cotswold Farm Park). 
The television series "Father Brown" is set in and primarily filmed in the Cotswolds. Scenes and buildings at Sudeley Castle were often featured in the series. The vicarage in Blockley was used for the main character's residence, and the Anglican church of St Peter and St Paul stood in for the Roman Catholic St Mary's. Other filming locations included Guiting Power, the former hospital in Moreton-in-Marsh, Winchcombe railway station, Lower Slaughter, and St Peter's Church in Upper Slaughter. In the 2010s BBC TV series "Poldark", the location for Ross Poldark's family home, Trenwith, is Chavenage House, Tetbury, which is open to the public. Many exterior shots of village life in the "Downton Abbey" TV series were filmed in Bampton, Oxfordshire; other filming locations in that county included Swinbrook, Cogges, and Shilton. The agriculture-themed television documentary series "Clarkson's Farm" was filmed at various locations around Chipping Norton. The author Jilly Cooper is closely associated with the area, basing her fictional county of Rutshire and its book series, the "Rutshire Chronicles", on it.
6903
1300836763
https://en.wikipedia.org/wiki?curid=6903
AC ChievoVerona
Associazione Calcio ChievoVerona, commonly referred to as ChievoVerona or simply Chievo, is an Italian football club named after and representing Chievo, a suburb of 4,500 inhabitants in Verona, Veneto. Since 2024 it has been owned by the team's former captain Sergio Pellissier, representing a group of almost 800 stakeholders created through a crowdfunding program, the first such case in Italian football. The team plays in Serie D, the fourth level of Italian football. The club was founded in 1929 and refounded twice, in 1948 and 2024. It is the only football team to have started from the lowest level of Italian football and climbed the whole amateur and professional pyramid, reaching Serie A for the first time in 2001–02 and European competition the year after. It currently plays at the Stadio Aldo Olivieri. During its years as a professional club, Chievo shared the 38,402-seat Stadio Marcantonio Bentegodi with its cross-town rivals Hellas Verona. History. Early years. The team was founded in 1929 by a small number of football fans from Chievo, a suburb of Verona. Initially, the club was not officially affiliated with the Italian Football Federation (FIGC), but nonetheless played in several amateur tournaments and friendly matches under the denomination "Opera Nazionale Dopolavoro Chievo", a title imposed by the fascist regime. The club's formal debut in an official league was on 8 November 1931. The team colours at the time were blue and white. However, Chievo disbanded in 1936 due to economic woes; the club returned to play in 1948, after World War II, registering in the regional Second Division. In 1957, the team moved to the field "Carlantonio Bottagisio", where they played until 1986. In 1959, after the restructuring of the football leagues, Chievo was admitted to the Seconda Categoria (Second Category), a regional league ranked next-to-last in the Italian football pyramid. That year, Chievo changed its name to "Cardi Chievo" after a new sponsor and was quickly promoted to the Prima Categoria, from which it experienced its first-ever relegation in 1962. Series of promotions. In 1964, Luigi Campedelli, a businessman and owner of the Paluani company, was named the new Chievo chairman. Under Campedelli's presidency, Chievo climbed through the Italian football pyramid, reaching Serie D after the 1974–75 season. Under the name "Paluani Chievo", the team was promoted to Serie C2 in 1986. Due to this promotion, Chievo was forced to move to the Stadio Marcantonio Bentegodi, the main venue in Verona; another promotion to Serie C1 followed in 1989. In 1990, the team changed its name to its current one, "A.C. ChievoVerona". In 1992, President Luigi Campedelli, who had returned to the helm of the club two years before, died of a heart attack, and his son Luca Campedelli, aged just 23, became the youngest chairman of an Italian professional football club. Campedelli promoted Giovanni Sartori to director of football and named Alberto Malesani as the new head coach. Under Malesani, the team astonishingly won the Serie C1 and was promoted to Serie B, where city rival Hellas Verona was playing at the time. In 1997, after Malesani signed for Fiorentina, Silvio Baldini was appointed the new head coach. The following season, with Domenico Caso as coach, saw the first dismissal of a coach during Luca Campedelli's presidency: Caso was fired and replaced by Lorenzo Balestro.
It was during these years that the nickname "mussi volanti" ("flying donkeys") was born. It originated from supporters of their crosstown rivals Hellas, who would mock long-suffering Chievo supporters that Chievo will only be promoted if "donkeys could fly" (the equivalent of the English language falsism "if pigs could fly," denoting an impossible dream). In 2000–01, Luigi Delneri was signed as coach and led Chievo, by virtue of its third-place finish in Serie B, to promotion to Serie A, the first time in team history that it had reached the top tier of Italian football. Mussi Volanti (2001–2007). In 2001–02, Chievo's Serie A debut season, the team was most critics' choice for an instant return to Serie B. However, they became the surprise team in the league, often playing spectacular and entertaining football and even leading the league for six consecutive weeks. The club finally ended the season with a highly respectable fifth-place finish, qualifying the team to play in the UEFA Cup. Chievo's impressive performance inspired a 2002 book about soccer economics titled "Fenomeno Chievo. Economia, costume, società" by Marco Vitale. In 2002–03, Chievo debuted at the European level but were eliminated in the first round by Red Star Belgrade. The team finished the Serie A season in seventh place, again proving itself one of the better Serie A teams. The 2003–04 season, the last with Delneri at the helm, saw Chievo finish ninth. The 2004–05 season is remembered as one of the toughest ever in Chievo's history. Mario Beretta, a Serie A novice from Ternana, was named coach, but after a strong start that brought Chievo to third behind Juventus and Milan, the team slowly lost position in the league table. With three matches remaining in the season, Chievo was third-from-last, a position which would see it relegated to Serie B. As a last resort, Beretta was fired, and Maurizio D'Angelo, a former Chievo player, was appointed temporarily to replace him as coach. Morale improved, and two wins and a draw from the final three matches proved enough to keep Chievo in Serie A. In 2005–06, Giuseppe Pillon of Treviso FBC was appointed as new coach. The team experienced a return to the successful Delneri era, both in style of play and results, which resulted in Chievo ending the season in seventh and gaining a berth in the UEFA Cup. However, because of the football scandal involving several top-class teams, all of which finished higher than Chievo in the 2005–06 season, the Flying Donkeys were awarded a place in the next Champions League preliminary phase. On 14 July 2006, the verdict in the scandal was made public. Juventus, Milan and Fiorentina, who had all initially qualified for the 2006–07 Champions League, and Lazio, who had initially qualified for the 2006–07 UEFA Cup, were all banned from UEFA competition for the 2006–07 season. However, Milan were allowed to enter the Champions League after their appeal to the FIGC. Chievo took up a place in the third qualifying stage of the competition along with Milan and faced Bulgarian side Levski Sofia. Chievo lost the first leg 2–0 in Sofia and managed a 2–2 home draw on the second leg and were eliminated by a 4–2 aggregate score, with Levski advancing to the Champions League group stage. As a Champions League third round qualifying loser, Chievo was given a place in the UEFA Cup final qualifying round. On 25 August 2006, they were drawn to face Portuguese side Braga. The first leg, played on 14 September in Braga, ended in a 2–0 win for the Portuguese. 
The return match, played on 28 September in Verona, although won by Chievo 2–1, resulted in a 3–2 aggregate loss and the club's elimination from the competition. On 16 October 2006, following a 1–0 defeat against Torino, head coach Giuseppe Pillon was fired and replaced by Luigi Delneri, one of the original symbols of the "miracle Chievo", who had led the club to the Serie A in 2002. On 27 May 2007, the last match day of the 2006–07 Serie A season, Chievo was one of five teams in danger of falling into the last undecided relegation spot. Needing only a draw against Catania, a direct competitor in the relegation battle, Chievo lost 2–0 playing on a neutral field in Bologna. Wins by Parma, Siena and Reggina condemned Chievo to Serie B for the 2007–08 season after six seasons in the topflight. A year with the "Cadetti" (2007–08). Chievo bounced back quickly from the disappointment of their relegation on the last matchday of 2006–07, searching for an immediate promotion return to the topflight. After the expected departure of several top-quality players, including Franco Semioli, Salvatore Lanna, Matteo Brighi, Paolo Sammarco and Erjon Bogdani, the manager Delneri also parted ways with the club. Giuseppe Iachini replaced him and the captain, Lorenzo D'Anna, gave way to Sergio Pellissier at the end of the transfer window. A new squad was constructed, most notably including the arrivals of midfielders Maurizio Ciaramitaro and Simone Bentivoglio, defender César and forward Antimo Iunco. This new incarnation of the "gialloblu" were crowned winter champions (along with Bologna), en route to a 41st matchday promotion after a 1–1 draw at Grosseto left them four points clear of third-place Lecce with one match remaining. In addition to winning promotion, they were conferred with the Ali della Vittoria trophy on the final matchday of the season, their first league title of any kind in 14 years. Return in Serie A (2008–2019). In their first season return to the topflight, Chievo immediately struggled in the league, resulting in the dismissal of Iachini in November and his replacement with former Parma boss Domenico Di Carlo. After Di Carlo's appointment, Chievo managed a remarkable resurgence that led the "gialloblu" out of the relegation zone after having collected just nine points from their first 17 matches. Highlight matches included a 3–0 defeat of Lazio (who then won the 2008–09 Coppa Italia title) at the Stadio Olimpico, and a thrilling 3–3 draw away to Juventus in which captain and longtime Chievo striker Sergio Pellissier scored a late equalizer to complete his first career hat-trick. A series of hard-fought draws against top clubs Roma, Internazionale and Genoa in the final stretch of the season solidified "Ceo"'s position outside the drop zone and Serie A status was finally confirmed on matchday 37 with a home draw against Bologna. An essentially unchanged line-up earned safety the following season with four matchdays to spare. Lorenzo D'Anna remained as coach of the club for the 2018–19 season after replacing Rolando Maran during the 2017–18 season. On 13 September, Chievo were deducted 3 points after being found guilty of false accounting on exchanging players with Cesena. President Luca Campedelli was banned for three months as a result of the scheme. Chievo were officially relegated on 14 April 2019 after a 3–1 home loss to Napoli. Serie B years and league exclusion (2019–2021). 
In July 2021, Chievo was expelled from Serie B for the 2021–22 season for being unable to prove its financial viability due to outstanding tax payments. The club argued that there was an agreement in place during the COVID-19 pandemic that allowed them to spread the payments out over a longer period. However, after three unsuccessful appeals, the decision to bar Chievo Verona from registering for Serie B was upheld, with Cosenza taking their place. Clivense and Serie D restart (2021–current). In the months following the club's exclusion, former captain Sergio Pellissier led the search for a new ownership group to allow a phoenix club to compete in Serie D under the Chievo name. However, on 21 August, Pellissier announced in an Instagram post that no owners had been found in time for the Serie D registration deadline. The original Chievo club meanwhile appealed to the Council of State against its exclusion and was at that point registered in no division, albeit still with the right to apply for a spot in an amateur league of Veneto in the following weeks. Campedelli eventually opted to keep the club alive as a youth team for the 2021–22 season, while Pellissier decided instead to found a new club himself, which was admitted to Terza Categoria at the very bottom of the Italian football league system; the club, originally named FC Chievo 2021, was then renamed FC Clivense following a legal warning from AC ChievoVerona. On 10 May 2024, Sergio Pellissier and the owners of Clivense, by then in Serie D, successfully acquired the logo and naming rights of the original ChievoVerona club at auction. Later, on 29 May, Clivense formally changed its denomination to AC ChievoVerona, thus becoming the legal heir to the original club, albeit maintaining white and blue as its colours. Identity. Crest. Since 1998 the official crest of the club has depicted Cangrande della Scala, ruler of Verona in medieval times, its shape taking inspiration from a historical statue located in the old town. The logo, coloured in yellow and blue, shows the full name of the club and the year of foundation. It was confirmed as Chievo's logo after a survey among the club's stakeholders in June 2024. Because the club was founded in 1929 by amateur football lovers as an after-work sports club, a form of organisation encouraged at the time by the fascist regime, its first crest included a fasces. From 1959, after adopting yellow and blue colours, the club used a Swiss-shield shape carrying its various official denominations over the years, including the names of private companies sponsoring the club. During the 1980s, president Luigi Campedelli, a businessman who owned the cake company Paluani, used the company's commercial logo as the official crest, often showing the full name on the official football shirt. From the 1990s, after reaching the professional leagues and changing its official name to A.C. ChievoVerona, the club's crest included for the first time the figure of Cangrande della Scala and boasted a letter V, symbolizing the pride of representing the whole city. In 2001, the logo took on its current appearance, including the Fraktur lettering and the foundation year, and received its last modernization in 2021. In the period 2001–2021 an alternative logo, depicting a white ladder on a burgundy background, was in use both on shirts and in the club's activities; it was inspired by the historical emblem of the Province of Verona and had already been used by the club in the 1930s after winning the provincial champion's title in the local leagues.
Colours. Chievo has worn two different colour patterns during its history: in its early years, until 1956, a white and blue combination, with occasional use of white/light blue and red-blue; and from 1956 until 2021 a yellow-blue pattern in different styles, inspired by the crest of Verona and already used by the city's main football team, Hellas Verona. After the refoundation of 2024 the club decided to keep the white and blue combination of its origins, already used by Clivense since 2021. Nicknames. The club's historic nickname is Gialloblu (from the club colours of yellow and blue), a nickname it shares with the more famous local rivals Hellas Verona. Local supporters often call the club Ceo, which is Venetian for Chievo. The club is also sometimes referred to as I Mussi Volanti ("The Flying Donkeys" in the Verona dialect of Venetian). "The Flying Donkeys" nickname was originally used by fans of crosstown rivals Hellas to mock Chievo. The two clubs first met in Serie B in the mid-1990s, with Hellas chanting Quando i mussi volara, il Ceo in Serie A – "Donkeys will fly before Chievo are in Serie A." However, once Chievo earned promotion to Serie A at the end of the 2000–01 Serie B season, Chievo fans started to call themselves "The Flying Donkeys". Notable players. Note: this list includes players that have reached international status. Fans. The Clivense fan base has a few groups within it, but the best known is the North Side 94, a group formed in 1994, coinciding with the club's promotion to Serie B. The supporters' group gave its full support to Sergio Pellissier's Clivense after the exclusion of ChievoVerona from all federal championships in 2021. Stadium. From its promotion to Serie C2 in 1986, ChievoVerona shared the Stadio Marcantonio Bentegodi with rival team Hellas Verona. Since the refoundation in 2024, the club has played its home games at the Stadio Aldo Olivieri in Verona. During the three previous years, FC Clivense was based at the Stadio Comunale (Phoenix Arena for sponsorship reasons) in San Martino Buon Albergo.
6904
11487766
https://en.wikipedia.org/wiki?curid=6904
Context switch
In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multiprogramming or multitasking operating system. In a traditional CPU, each process – a program in execution – uses the various CPU registers to store data and hold the current state of the running process. In a multitasking operating system, however, the operating system switches between processes or threads so that the execution of multiple processes can be interleaved. For every switch, the operating system must save the state of the currently running process and then load the state of the next process that will run on the CPU. This sequence of operations – storing the state of the running process and loading the state of the next one – is called a context switch. The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance. Cost. Context switches are usually computationally intensive, and much of the design of operating systems is aimed at minimizing the number and cost of context switches. Switching from one process to another requires a certain amount of time for administration – saving and loading registers and memory maps, updating various tables and lists, and so on. What is actually involved in a context switch depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources, whereas unrelated, non-cooperating processes share few). For example, in the Linux kernel, context switching involves loading the corresponding process control block (PCB), stored in the kernel's PCB table, to retrieve information about the state of the new process. CPU state information, including the registers, stack pointer, and program counter, as well as memory management information such as segmentation tables and page tables (unless the old process shares its memory with the new one), are loaded from the PCB of the new process. To avoid incorrect address translation when the previous and current processes use different memory, the translation lookaside buffer (TLB) must be flushed. This negatively affects performance, because the TLB is empty after most context switches and every memory reference will initially miss in it. Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.
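To make the state being saved and restored concrete, the following is a minimal, illustrative sketch in C of a process control block and the portable part of a context switch. It does not reflect the Linux kernel's actual, architecture-specific structures; all names here (cpu_context, pcb, low_level_switch and so on) are invented for the example, and the real register save/restore would have to be written in assembly.

```c
/* Illustrative sketch only: a simplified PCB and context-switch outline.
 * Real kernels use architecture-specific structures and assembly code. */
#include <stdint.h>
#include <stdio.h>

enum proc_state { READY, RUNNING, BLOCKED };

/* The per-process CPU state that must be preserved across a switch. */
struct cpu_context {
    uint64_t program_counter;   /* where execution resumes */
    uint64_t stack_pointer;     /* top of the process's stack */
    uint64_t general_regs[16];  /* general-purpose registers */
    uint64_t flags;             /* status/flags register */
};

/* A minimal process control block: saved context plus bookkeeping. */
struct pcb {
    int pid;
    enum proc_state state;
    struct cpu_context ctx;     /* saved CPU state */
    void *page_table;           /* address-space root; changing it is what forces a TLB flush */
};

/* Stand-in for the architecture-specific assembly routine. Here it does
 * nothing, so the sketch compiles and runs; a real implementation stores
 * the live registers into 'from' and reloads them from 'to'. */
static void low_level_switch(struct cpu_context *from, struct cpu_context *to)
{
    (void)from;
    (void)to;
}

/* The portable bookkeeping around the register swap. */
static void context_switch(struct pcb *from, struct pcb *to)
{
    from->state = READY;        /* the outgoing process remains runnable */
    to->state = RUNNING;
    /* a real kernel would also switch to->page_table here, flushing the TLB */
    low_level_switch(&from->ctx, &to->ctx);
    /* execution resumes here only when 'from' is scheduled again */
}

int main(void)
{
    struct pcb a = { .pid = 1, .state = RUNNING };
    struct pcb b = { .pid = 2, .state = READY };
    context_switch(&a, &b);
    printf("pid %d is now %s\n", b.pid, b.state == RUNNING ? "RUNNING" : "not RUNNING");
    return 0;
}
```

The point of the sketch is the division of labour it mirrors: the data that must survive a switch lives in the PCB, the actual register swap is a small non-portable routine, and everything else the kernel does around it (updating queues, switching address spaces) is ordinary bookkeeping.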
Switching cases. There are three potential triggers for a context switch: Multitasking. Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable. To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch. Interrupt handling. Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU can be "interrupted" (in this case by the hardware, which sends an interrupt request to the PIC) and presented with the data that was read. For interrupts, a program called an "interrupt handler" is installed, and it is the interrupt handler that handles the interrupt from the disk. When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts; instead, the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state. User and kernel mode switching. When the system transitions between user mode and kernel mode, a context switch is not necessary; a "mode transition" is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time. Steps. The state of the currently executing process must be saved so it can be restored when rescheduled for execution. The process state includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. This is usually stored in a data structure called a "process control block" (PCB) or "switchframe". The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or there may be some specific operating system-defined data structure for this information. A handle to the PCB is added to a queue of processes that are ready to run, often called the "ready queue". Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which process is chosen from the ready queue (i.e., it may be a priority queue).
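As a sketch of the ready-queue handling just described, the loop below reduces a round-robin scheduler to "append the current PCB to the tail of the ready queue, take the next one from the head". It is illustrative only, assuming a trimmed-down pcb type and a fixed-size FIFO of invented names; real kernels keep far richer state and, as noted above, may use a priority queue rather than plain FIFO order.

```c
/* Illustrative round-robin scheduling sketch: a ring-buffer ready queue and
 * one scheduling decision per simulated timer tick. Not real kernel code. */
#include <stdio.h>
#include <stddef.h>

struct pcb { int pid; };             /* trimmed to the bare minimum */

#define MAX_READY 8
static struct pcb *ready_queue[MAX_READY];
static size_t head, tail, count;     /* simple FIFO ring buffer */

static void ready_enqueue(struct pcb *p)
{
    if (count == MAX_READY) return;  /* sketch: silently ignore overflow */
    ready_queue[tail] = p;
    tail = (tail + 1) % MAX_READY;
    count++;
}

static struct pcb *ready_dequeue(void)
{
    if (count == 0) return NULL;     /* nothing runnable */
    struct pcb *p = ready_queue[head];
    head = (head + 1) % MAX_READY;
    count--;
    return p;
}

/* One scheduling decision: requeue the current process and pick the next.
 * A real scheduler would also save and restore CPU state at this point. */
static struct pcb *schedule(struct pcb *current)
{
    ready_enqueue(current);              /* current goes back on the ready queue */
    struct pcb *next = ready_dequeue();  /* choose the next process to run */
    return next ? next : current;        /* nothing else ready: keep running */
}

int main(void)
{
    struct pcb a = {1}, b = {2}, c = {3};
    ready_enqueue(&b);
    ready_enqueue(&c);
    struct pcb *running = &a;
    for (int i = 0; i < 5; i++) {        /* simulate a few timer ticks */
        running = schedule(running);
        printf("tick %d: pid %d is running\n", i, running->pid);
    }
    return 0;
}
```

Compiled and run, the simulation simply cycles through the three processes in order, which is the behaviour the ready-queue description implies when every process has equal priority.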
Examples. The details vary depending on the architecture and operating system, but these are common scenarios. No context switch needed. Consider a general arithmetic addition operation A = B + 1. The instruction is stored in the instruction register and the program counter is incremented. A and B are read from memory and are stored in registers R1 and R2 respectively. In this case, B + 1 is calculated and written to R1 as the final answer. Because this operation consists only of sequential reads and writes, with no waiting on I/O or function calls, no context switch or wait takes place. Context switch caused by interrupt. Suppose a process A is running and a timer interrupt occurs. The user registers — program counter, stack pointer, and status register — of process A are then implicitly saved by the CPU onto the kernel stack of A. Then, the hardware switches to kernel mode and jumps into the interrupt handler so that the operating system can take over. The operating system then calls its low-level context-switch routine, which first saves the general-purpose user registers of A onto A's kernel stack, then saves A's current kernel register values into the PCB of A, restores the kernel registers from the PCB of process B, and switches context, that is, changes the kernel stack pointer to point to the kernel stack of process B. The operating system then returns from the interrupt. The hardware then loads the user registers from B's kernel stack, switches to user mode, and starts running process B from B's program counter. Performance. Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks. Switching between threads of a single process can be faster than between two separate processes because threads share the same virtual memory maps, so a TLB flush is not necessary. The time to switch between two separate processes is called the process switching latency. The time to switch between two threads of the same process is called the thread switching latency. The time from when a hardware interrupt is generated to when the interrupt is serviced is called the interrupt latency. Switching between two processes in a single address space operating system can be faster than switching between two processes in an operating system with private per-process address spaces. Hardware vs. software. Context switching can be performed primarily by software or hardware. Some processors, like the Intel 80386 and its successors, have hardware support for context switches, by making use of a special data segment designated the task state segment (TSS). A task switch can be explicitly triggered with a CALL or JMP instruction targeted at a TSS descriptor in the global descriptor table. It can occur implicitly when an interrupt or exception is triggered if there is a task gate in the interrupt descriptor table (IDT). When a task switch occurs, the CPU can automatically load the new state from the TSS. As with other tasks performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows and Linux, do not use this feature. This is mainly due to two reasons:
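Returning to the process-switching latency mentioned under Performance above, a common way to get a rough user-space estimate is to bounce a byte between two processes through a pair of pipes, which forces the scheduler to alternate between them. The sketch below is an illustrative addition, not taken from any cited source: it assumes a POSIX system, the ROUNDS constant is an arbitrary choice for the example, and the printed figure is only an upper bound because pipe system-call overhead is included in the measurement.

```c
/* Rough upper-bound estimate of process-switching latency on a POSIX system:
 * parent and child ping-pong a single byte through two pipes. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>

#define ROUNDS 100000

int main(void)
{
    int ping[2], pong[2];
    char byte = 'x';

    if (pipe(ping) == -1 || pipe(pong) == -1) { perror("pipe"); return 1; }

    pid_t child = fork();
    if (child == -1) { perror("fork"); return 1; }

    if (child == 0) {                        /* child: echo every byte back */
        for (int i = 0; i < ROUNDS; i++) {
            if (read(ping[0], &byte, 1) != 1) _exit(1);
            if (write(pong[1], &byte, 1) != 1) _exit(1);
        }
        _exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ROUNDS; i++) {       /* parent: send, then wait for echo */
        if (write(ping[1], &byte, 1) != 1) return 1;
        if (read(pong[0], &byte, 1) != 1) return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    /* each round trip forces at least one switch in each direction */
    printf("approx. %.0f ns per switch (upper bound)\n", ns / (2.0 * ROUNDS));
    return 0;
}
```

Dividing by two reflects the fact that each round trip requires at least one switch in each direction; on a multi-core machine the two processes may also run on different cores, so the result should be read as indicative rather than precise.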
6907
7903804
https://en.wikipedia.org/wiki?curid=6907
Chakra
A chakra is one of the various focal points used in a variety of ancient meditation practices, collectively known as Tantra, part of the inner traditions of Hinduism and Buddhism. The concept of the chakra arose in Hinduism. Beliefs differ between the Indian religions: Buddhist texts mention four or five chakras, while Hindu sources often have six or seven. The modern "Western chakra system" arose from multiple sources, starting in the 1880s with H. P. Blavatsky and other Theosophists, followed by Sir John Woodroffe's 1919 book "The Serpent Power", and Charles W. Leadbeater's 1927 book "The Chakras". Psychological and other attributes, rainbow colours, and a wide range of correspondences with other systems such as alchemy, astrology, gemstones, homeopathy, Kabbalah and Tarot were added later. Etymology. Lexically, "chakra" is the Indic reflex of an ancestral Indo-European form "*kʷékʷlos", whence also "wheel" and "cycle". It has both literal and metaphorical uses, as in the "wheel of time" or "wheel of dharma", such as in "Rigveda" hymn verse 1.164.11, pervasive in the earliest Vedic texts. In Buddhism, especially in Theravada, the Pali noun "cakka" connotes "wheel". Within the Buddhist scriptures referred to as the Tripitaka, Shakyamuni Buddha variously refers to the "dhammacakka", or "wheel of dharma", connoting that this dharma, universal in its advocacy, should bear the marks characteristic of any temporal dispensation. Shakyamuni Buddha spoke of freedom from cycles in and of themselves, whether karmic, reincarnative, liberative, cognitive or emotional. In Jainism, the term "chakra" also means "wheel" and appears in various contexts in its ancient literature. As in other Indian religions, "chakra" in esoteric theories in Jainism such as those by Buddhisagarsuri means a yogic energy center. Ancient history. The word "chakra" appears to first emerge within the Vedas, though not in the sense of psychic energy centers, but rather as "chakravartin", the king who "turns the wheel of his empire" in all directions from a center, representing his influence and power. The iconography popular in representing the "Chakras", states the scholar David Gordon White, traces back to the five symbols of yajna, the Vedic fire altar: "square, circle, triangle, half moon and dumpling". The hymn 10.136 of the "Rigveda" mentions a renunciate yogi with a female named "kunannamā". Literally, it means "she who is bent, coiled", representing both a minor goddess and one of many embedded enigmas and esoteric riddles within the "Rigveda". Some scholars, such as D.G. White and Georg Feuerstein, have suggested that she may be a reference to kundalini shakti and a precursor to the terminology associated with the chakras in later tantric traditions. Breath channels (nāḍi) are mentioned in the classical Upanishads of Hinduism from the 1st millennium BCE, but not psychic-energy chakra theories. The three classical nadis are Ida, Pingala and Sushumna, of which the central channel, Sushumna, is said to be foremost according to the Kṣurikā-Upaniṣhad. The latter, states David Gordon White, were introduced about the 8th century CE in Buddhist texts as hierarchies of inner energy centers, such as in the "Hevajra Tantra" and "Caryāgiti". These are called by various terms such as "cakka", "padma" (lotus) or "pitha" (mound). These medieval Buddhist texts mention only four chakras, while later Hindu texts such as the "Kubjikāmata" and "Kaulajñānanirnaya" expanded the list to many more.
In contrast to White, according to Feuerstein, early Upanishads of Hinduism do mention "chakras" in the sense of "psychospiritual vortices", along with other terms found in tantra: "prana" or "vayu" (life energy) along with "nadi" (energy carrying arteries). According to Gavin Flood, the ancient texts do not present "chakra" and kundalini-style yoga theories although these words appear in the earliest Vedic literature in many contexts. The "chakra" in the sense of four or more vital energy centers appears in the medieval-era Hindu and Buddhist texts. The 10th century Kubjikāmatatantra describes a system of five chakras which serve as the seats of five sets of divine female beings, namely the Devīs, the Dūtīs, the Mātṛs, the Yoginīs and the Khecarīs. Overview. The chakras are part of esoteric ideas and concepts about physiology and psychic centers that emerged across Indian traditions. The belief held that human life simultaneously exists in two parallel dimensions: one the "physical body" ("sthula sarira") and the other a psychological, emotional, non-physical dimension of mind called the "subtle body" ("sukshma sarira"). This subtle body is energy, while the physical body is mass. The psyche or mind plane corresponds to and interacts with the body plane, and the belief holds that the body and the mind mutually affect each other. The subtle body consists of nadi (energy channels) connected by nodes of psychic energy called "chakra". The belief grew into extensive elaboration, with some suggesting 88,000 chakras throughout the subtle body. The number of major chakras varied between various traditions, but they typically ranged between four and seven. The important chakras are stated in Hindu and Buddhist texts to be arranged in a column along the spinal cord, from its base to the top of the head, connected by vertical channels. The tantric traditions sought to master them, awakening and energizing them through various breathing exercises or with the assistance of a teacher. These chakras were also symbolically mapped to specific human physiological capacities, seed syllables (bija), sounds, subtle elements (tanmatra), in some cases deities, colors and other motifs. Belief in the chakra system of Hinduism and Buddhism differs from the historic Chinese system of meridians in acupuncture. Unlike the latter, the "chakra" relates to the subtle body, wherein it has a position but no definite nervous node or precise physical connection. The tantric systems envision it as continually present, highly relevant and a means to psychic and emotional energy. It figures in certain yogic rituals and in the meditative discovery of radiant inner energy ("prana" flows) and mind-body connections. The meditation is aided by extensive symbology, mantras, diagrams and models (deity and mandala). The practitioner proceeds step by step from perceptible models to increasingly abstract models, in which the deity and external mandala are abandoned and the inner self and internal mandalas are awakened. These ideas are not unique to Hindu and Buddhist traditions. Similar and overlapping concepts emerged in other cultures in the East and the West, and these are variously called by other names such as subtle body, spirit body, esoteric anatomy, sidereal body and etheric body. According to Geoffrey Samuel and Jay Johnston, professors of Religious studies known for their studies on Yoga and esoteric traditions: Contrast with classical yoga. Chakra and related beliefs have been important to the esoteric traditions, but they are not directly related to mainstream yoga.
According to the Indologist Edwin Bryant and other scholars, the goals of classical yoga such as spiritual liberation (freedom, self-knowledge, moksha) are "attained entirely differently in classical yoga, and the "cakra / nadi / kundalini" physiology is completely peripheral to it." Classical traditions. The classical eastern traditions, particularly those that developed in India during the 1st millennium AD, primarily describe "nadi" and "chakra" in a "subtle body" context. To them, these exist in the same dimension as the psyche-mind reality, which is invisible yet real. In the "nadi" and "cakra" flow the "prana" (breath, life energy). The concept of "life energy" varies between the texts, ranging from simple inhalation-exhalation to far more complex association with breath-mind-emotions-sexual energy. This prana or essence is what vanishes when a person dies, leaving a gross body. In some formulations of this concept, the subtle body is what withdraws within when one sleeps. All of it is believed to be reachable and awakenable, and important both for an individual's body-mind health and for how one relates to other people in one's life. This subtle body network of "nadi" and "chakra" is, according to some later Indian theories and many New Age speculations, closely associated with emotions. Hindu tantra. Esoteric traditions in Hinduism mention various numbers and arrangements of chakras, of which a classical system of six-plus-one, the last being the Sahasrara, is most prevalent. This seven-part system, central to the core texts of hatha yoga, is one among many systems found in Hindu tantric literature. Hindu Tantra associates six Yoginis with six places in the subtle body, corresponding to the six chakras of the six-plus-one system. The Chakra methodology is extensively developed in the goddess tradition of Hinduism called Shaktism. It is an important concept along with yantras, mandalas and kundalini yoga in its practice. Chakra in Shakta tantrism means circle, an "energy center" within, as well as being a term for group rituals such as in "chakra-puja" (worship within a circle) which may or may not involve tantra practice. The chakra-based system is a part of the meditative exercises that came to be known as yoga. Within Kundalini yoga, the techniques of breathing exercises, visualizations, mudras, bandhas, kriyas, and mantras are focused on manipulating the flow of subtle energy through chakras. Buddhist tantra. The esoteric traditions in Buddhism generally teach four chakras. In some early Buddhist sources, these chakras are identified as: manipura (navel), anahata (heart), vishuddha (throat) and ushnisha kamala (crown). In one development within the Nyingma lineage of the "Mantrayana" of Tibetan Buddhism, a popular conceptualization of the chakras, in increasing order of subtlety, is as follows: Nirmanakaya (gross self), Sambhogakaya (subtle self), Dharmakaya (causal self), and Mahasukhakaya (non-dual self), each vaguely and indirectly corresponding to the categories within the Shaiva "Mantramarga" universe, i.e., Svadhisthana, Anahata, Visuddha, Sahasrara, etc. However, depending on the meditational tradition, these vary between three and six. The chakras are considered psycho-spiritual constituents, each bearing meaningful correspondences to cosmic processes and their postulated Buddha counterpart.
A system of five chakras is common among the Mother class of Tantras and these five chakras along with their correspondences are: Chakras play a key role in Tibetan Buddhism, and are considered to be the pivotal providence of Tantric thinking. And, the precise use of the chakras across the gamut of tantric sadhanas gives little space to doubt the primary efficacy of Tibetan Buddhism as distinct religious agency, that being that precise revelation that, without Tantra there would be no Chakras, but more importantly, without Chakras, there is no Tibetan Buddhism. The highest practices in Tibetan Buddhism point to the ability to bring the subtle pranas of an entity into alignment with the central channel, and to thus penetrate the realisation of the ultimate unity, namely, the "organic harmony" of one's individual consciousness of Wisdom with the co-attainment of All-embracing Love, thus synthesizing a direct cognition of absolute Buddhahood. According to Samuel, the buddhist esoteric systems developed cakra and nadi as "central to their soteriological process". The theories were sometimes, but not always, coupled with a unique system of physical exercises, called "yantra yoga" or "phrul khor". Chakras, according to the Bon tradition, enable the gestalt of experience, with each of the five major chakras, being psychologically linked with the five experiential qualities of unenlightened consciousness, the six realms of woe. The tsa lung practice embodied in the Trul khor lineage, unbaffles the primary channels, thus activating and circulating liberating prana. Yoga awakens the deep mind, thus bringing forth positive attributes, inherent gestalts, and virtuous qualities. In a computer analogy, the screen of one's consciousness is slated and an attribute-bearing file is called up that contains necessary positive or negative, supportive qualities. Tantric practice is said to eventually transform all experience into clear light. The practice aims to liberate from all negative conditioning, and the deep cognitive salvation of freedom from control and unity of perception and cognition. Seven chakra system. The most studied chakra system incorporates six major chakras along with a seventh centre generally not regarded as a chakra. These points are arranged vertically along the axial channel (sushumna nadi in Hindu texts, Avadhuti in some Buddhist texts). According to Gavin Flood, this system of six chakras plus the "sahasrara" "center" at the crown first appears in the "Kubjikāmata-tantra", an 11th-century Kaula work. It was this chakra system that was translated in the early 20th century by Sir John Woodroffe (also called Arthur Avalon) in his book "The Serpent Power". Avalon translated the Hindu text "Ṣaṭ-Cakra-Nirūpaṇa" meaning the examination (nirūpaṇa) of the six (ṣaṭ) chakras (cakra). The Chakras are traditionally considered meditation aids. The yogi progresses from lower chakras to the highest chakra blossoming in the crown of the head, internalizing the journey of spiritual ascent. In both the Hindu kundalini and Buddhist candali traditions, the chakras are pierced by a dormant energy residing near or in the lowest chakra. In Hindu texts she is known as Kundalini, while in Buddhist texts she is called Candali or Tummo (Tibetan: "gtum mo", "fierce one"). Below are the common new age description of these six chakras and the seventh point known as sahasrara. This new age version incorporates the Newtonian colours of the rainbow not found in any ancient Indian system. Western chakra system. 
History. Kurt Leland, for the Theosophical Society in America, concluded that the western chakra system was produced by an "unintentional collaboration" of many groups of people: esotericists and clairvoyants, often theosophical; Indologists; the scholar of myth, Joseph Campbell; the founders of the Esalen Institute and the psychological tradition of Carl Jung; the colour system of Charles W. Leadbeater's 1927 book "The Chakras", treated as traditional lore by some modern Indian yogis; and energy healers such as Barbara Brennan. Leland states that far from being traditional, the two main elements of the modern system, the rainbow colours and the list of qualities, first appeared together only in 1977. The concept of a set of seven chakras came to the West in the 1880s; at that time each chakra was associated with a nerve plexus. In 1918, Sir John Woodroffe, alias Arthur Avalon, translated two Indian texts, the "Ṣaṭ-Cakra-Nirūpaṇa" and the "Pādukā-Pañcaka"; his publication of and commentary on them in his book "The Serpent Power" drew Western attention to the seven-chakra theory. In the 1920s, each of the seven chakras was associated with an endocrine gland, a tradition that has persisted. More recently, the lower six chakras have been linked to both nerve plexuses and glands. The seven rainbow colours were added by Leadbeater in 1927; a variant system in the 1930s proposed six colours plus white. Leadbeater's theory was influenced by Johann Georg Gichtel's 1696 book "Theosophia Practica", which mentioned inner "force centres". Psychological and other attributes such as layers of the aura, developmental stages, associated diseases, Aristotelian elements, emotions, and states of consciousness were added still later. A wide range of supposed correspondences such as with alchemical metals, astrological signs and planets, foods, herbs, gemstones, homeopathic remedies, Kabbalistic spheres, musical notes, totem animals, and Tarot cards have also been proposed. New Age. In "Anatomy of the Spirit" (1996), Caroline Myss described the function of chakras as follows: "Every thought and experience you've ever had in your life gets filtered through these chakra databases. Each event is recorded into your cells...". The chakras are described as being aligned in an ascending column from the base of the spine to the top of the head. New Age practices often associate each chakra with a certain colour. In various traditions, each chakra is associated with a physiological function, an aspect of consciousness, and a classical element; these do not correspond to those used in ancient Indian systems. The chakras are visualised as lotuses or flowers with a different number of petals for each chakra. The chakras are thought to vitalise the physical body and to be associated with interactions of a physical, emotional and mental nature. They are considered loci of vital or spiritual energy (prana), which is thought to flow among them along pathways called nadi. The function of the chakras is to spin and draw in this energy to keep the spiritual, mental, emotional and physical health of the body in balance. Rudolf Steiner considered the chakra system to be dynamic and evolving. He suggested that this system has become different for modern people than it was in ancient times and that it will, in turn, be radically different in future. Skeptical response. There is no scientific evidence to prove chakras exist, nor is there any meaningful way to try and measure them scientifically.
The Edinburgh Skeptics Society claimed that there has never been any evidence for chakras.
6910
7903804
https://en.wikipedia.org/wiki?curid=6910
Cloning
Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; this reproduction of an organism by itself without a mate is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating clones of organisms, cells, and DNA fragments. The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep achieved notoriety for being the first mammal cloned from a somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a single living cell is used to clone a large population of cells that contain identical DNA molecules. In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning—particularly human cloning—is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare). Etymology. Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word for "twig", referring to the process whereby a new plant can be created from a twig. In botany, the term lusus was used. In horticulture, the spelling "clon" was used until the early twentieth century; the final "e" came into use to indicate the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling "clone" has been used exclusively. Natural cloning. Natural cloning is the production of clones without the involvement of genetic engineering techniques or human intervention (i.e. artificial cloning). Natural cloning occurs through a variety of natural mechanisms, from single-celled organisms to complex multicellular organisms, and has allowed life forms to spread for hundreds of millions of years. Versions of this reproduction method are used by plants, fungi, and bacteria, and it is also the way that clonal colonies reproduce themselves. Some of the mechanisms explored and used in plants and animals are binary fission, budding, fragmentation, and parthenogenesis. It can also occur during some forms of asexual reproduction, when a single parent organism produces genetically identical offspring by itself. Many plants are well known for their natural cloning ability, including blueberry plants, hazel trees, the Pando trees, the Kentucky coffeetree, "Myrica", and the American sweetgum. It also occurs accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry identical DNA. Molecular cloning. Molecular cloning refers to the process of making multiple molecules. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA.
It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Cloning of any DNA fragment essentially involves four steps: isolation of the DNA of interest, ligation into a vector, transfection into host cells, and screening or selection of successfully transfected cells. Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a "cloning strategy". Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used in which the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes, and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells into which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (alpha-factor complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning. Cloning unicellular organisms. Cloning a cell means deriving a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media. A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders).
In this technique a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single and potentially clonal distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cloning stem cells. Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient. The embryo will then form a blastocyst which has the potential to form/become any cell in the body. The reason why SCNT is used for cloning is because somatic cells can be easily acquired and cultured in the lab. This process can either add or delete specific genomes of farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and instead of using sperm and egg genomes to replicate, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte will react to the somatic cell nucleus, the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is relatively the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells could be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into an egg cytoplasm. This creates a one-cell embryo. The grouped somatic cell and egg cytoplasm are then introduced to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agriculture animals for food consumption. It successfully cloned sheep, cattle, goats, and pigs. Another benefit is SCNT is seen as a solution to clone endangered species that are on the verge of going extinct. However, stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss in resulting cells in early research. 
For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated, and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from being well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten and in 2016, a Korean Company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning. Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Horticultural. The term "clone" is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potatoes and bananas. Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants e.g. dandelion and certain viviparous grasses also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Parthenogenesis. Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), Cape honeybees, and lizards including the Komodo dragon and several whiptails. The growth and development occurs without fertilization by a male. 
In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example of parthenogenesis is the little fire ant ("Wasmannia auropunctata"), which is native to Central and South America but has spread throughout many tropical environments. Artificial cloning of organisms. Artificial cloning of organisms may also be called "reproductive cloning". First steps. Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, that directs the development of groups of cells into particular tissues and organs. In 1924 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Methods. Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed. If the egg begins to divide normally it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA and during SCNT this mitochondrial DNA is wholly from the cytoplasmic donor's egg, thus the mitochondrial genome is not the same as that of the nucleus donor cell from which it was produced. This may have important implications for cross-species nuclear transfer in which nuclear-mitochondrial incompatibilities may lead to death. Artificial "embryo splitting" or "embryo twinning", a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split into two distinct embryos, which can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Dolly the sheep. Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003 when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997. Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, programmed to express only a distinct subset of its genes, can be reprogrammed to grow an entirely new organism. 
Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this concept was not yet demonstrated in a mammalian system. The first mammalian cloning (resulting in Dolly) had a success rate of 29 embryos per 277 eggs used, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cell cloned to make her was a mammary gland cell, and Parton is known for her ample cleavage. Recent advances in biotechnology have allowed a company named Colossal Biosciences to modify cloned wolves so that they resemble dire wolves. "The company used a combination of gene-editing techniques and ancient DNA found in fossils to engineer the newborn pups." It is currently contested whether these animals can be considered true dire wolves, but the work illustrates how gene modification, and possibly cloning, could advance in the future. The company produced three of the white "dire wolves", and scientists are more interested in how the approach might be applied to endangered animals. Species cloned and applications. The modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Human cloning. Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. At present, scientists have no intention of trying to clone people, and they believe their results should spark a wider discussion about the laws and regulations the world needs to govern cloning. Two commonly discussed types of theoretical human cloning are "therapeutic cloning" and "reproductive cloning". Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Ethical issues of cloning. There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. 
While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Cloning humans could lead to serious violations of human rights. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. There is at least one religion, Raëlism, in which cloning plays a major role. Contemporary work on this topic is concerned with the ethics, adequate regulation and issues of any cloning carried out by humans, not potentially by extraterrestrials (including in the future), and largely also not replication – also described as mind cloning – of potential whole brain emulations. Cloning of animals is opposed by animal-groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved as safe by the US FDA, its use is opposed by groups concerned about food safety. In practical terms, the inclusion of "licensing requirements for embryo research projects and fertility clinics, restrictions on the commodification of eggs and sperm, and measures to prevent proprietary interests from monopolizing access to stem cell lines" in international cloning regulations has been proposed, albeit e.g. effective oversight mechanisms or cloning requirements have not been described. Cloning extinct and endangered species. Cloning, or more precisely, the reconstruction of functional DNA from extinct species has, for decades, been a dream. Possible implications of this were dramatized in the 1984 novel "Carnosaur" and the 1990 novel "Jurassic Park". The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Conservation cloning. Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "Conservation cloning". Engineers have proposed a "lunar ark" in 2021 – storing millions of seed, spore, sperm and egg samples from Earth's contemporary species in a network of lava tubes on the Moon as a genetic backup. Similar proposals have been made since at least 2008. These also include sending human customer DNA, and a proposal for "a lunar backup record of humanity" that includes genetic information by Avi Loeb et al. 
In 2020, the San Diego Zoo began a number of projects in partnership with the conservation organization Revive & Restore and the ViaGen Pets and Equine Company to clone individuals of genetically-impoverished endangered species. A Przewalski's horse was cloned from preserved tissue of a stallion whose genes are absent in the surviving populations of the species, which descend from twenty individuals. The clone, named Kurt, had been born to a domestic surrogate mother, and was partnered with a natural-born Przewalski's mare in order to socialize him with the species' natural behavior before being introduced to the Zoo's breeding herd. In 2023, a second clone of the original stallion, named Ollie, was born; this marked the first instance of multiple living clones of a single individual of an endangered species being alive at the same time. Also in 2020, a clone named Elizabeth Ann was produced of a female black-footed ferret that had no living descendants. While Elizabeth Ann became sterile due to secondary health complications, a pair of additional clones of the same individual, named Antonia and Noreen, were born to distinct surrogate mothers, and Antonia successfully reproduced later in the year. De-extinction. One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they will extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an Asian elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years. The challenges are formidable. Extensively degraded DNA that may be suitable for sequencing may not be suitable for cloning; it would have to be synthetically reconstituted. In any case, with currently available technology, DNA alone is not suitable for mammalian cloning; intact viable cell nuclei are required. Patching pieces of reconstituted mammoth DNA into an Asian elephant cell nucleus would result in an elephant-mammoth hybrid rather than a true mammoth. Moreover, true de-extinction of the wooly mammoth species would require a breeding population, which would require cloning of multiple genetically distinct but reproductively compatible individuals, multiplying both the amount of work and the uncertainties involved in the project. There are potentially other post-cloning problems associated with the survival of a reconstructed mammoth, such as the requirement of ruminants for specific symbiotic microbiota in their stomachs for digestion. Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species. Many such "de-extinction" projects are being championed by the non-profit Revive & Restore. In 2022, scientists showed major limitations and the scale of challenge of genetic-editing-based de-extinction, suggesting resources spent on more comprehensive de-extinction projects such as of the woolly mammoth may currently not be well allocated and substantially limited. Their analyses "show that even when the extremely high-quality Norway brown rat (R. 
norvegicus) is used as a reference, nearly 5% of the genome sequence is unrecoverable, with 1,661 genes recovered at lower than 90% completeness, and 26 completely absent", complicated further by that "distribution of regions affected is not random, but for example, if 90% completeness is used as the cutoff, genes related to immune response and olfaction are excessively affected" due to which "a reconstructed Christmas Island rat would lack attributes likely critical to surviving in its natural or natural-like environment". In a 2021 online session of the Russian Geographical Society, Russia's defense minister Sergei Shoigu mentioned using the DNA of 3,000-year-old Scythian warriors to potentially bring them back to life. The idea was described as absurd at least at this point in news reports and it was noted that Scythians likely weren't skilled warriors by default. The idea of cloning Neanderthals or bringing them back to life in general is controversial but some scientists have stated that it may be possible in the future and have outlined several issues or problems with such as well as broad rationales for doing so. Unsuccessful attempts. In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last "bucardo" (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the Giant panda and Cheetah. In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction. However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria. In 2003, for the first time, an extinct animal, the Pyrenean ibex mentioned above was cloned, at the Centre of Food Technology and Research of Aragon, using the preserved frozen cell nucleus of the skin samples from 2001 and domestic goat egg-cells. The ibex died shortly after birth due to physical defects in its lungs. Lifespan. After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals. Other sources have noted that the offspring of clones tend to be healthier than the original clones and indistinguishable from animals produced naturally. Some posited that Dolly the sheep may have aged more quickly than naturally born animals, as she died relatively early for a sheep at the age of six. Ultimately, her death was attributed to a respiratory illness, and the "advanced aging" theory is disputed. A 2016 study indicated that once cloned animals survive the first month or two of life they are generally healthy. 
However, early pregnancy loss and neonatal losses are still greater with cloning than with natural conception or assisted reproduction (IVF). Current research is attempting to overcome these problems. In popular culture. Discussion of cloning in the popular media often presents the subject negatively. An article in the 8 November 1993 issue of "Time" portrayed cloning in a negative way, modifying Michelangelo's "Creation of Adam" to depict Adam with five identical hands. The 10 March 1997 issue of "Newsweek" also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers. The concept of cloning, particularly human cloning, has featured in a wide variety of science fiction works. An early fictional depiction of cloning is Bokanovsky's Process, which features in Aldous Huxley's 1931 dystopian novel "Brave New World". The process is applied to fertilized human eggs "in vitro", causing them to split into identical genetic copies of the original. Following renewed interest in cloning in the 1950s, the subject was explored further in works such as Poul Anderson's 1953 story "UN-Man", which describes a technology called "exogenesis", and Gordon Rattray Taylor's book "The Biological Time Bomb", which popularised the term "cloning" in 1963. Cloning is a recurring theme in a number of contemporary science fiction films, ranging from action films such as "Anna to the Infinite Power", "The Boys from Brazil", "Jurassic Park" (1993), "Alien Resurrection" (1997), "The 6th Day" (2000), "Resident Evil" (2002), "" (2002), "The Island" (2005), "Tales of the Abyss" (2006), and "Moon" (2009) to comedies such as Woody Allen's 1973 film "Sleeper". The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series "Doctor Who", the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then – in an apparent homage to the 1966 film "Fantastic Voyage" – shrunk to microscopic size to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as "The Matrix" and "Star Wars: Episode II – Attack of the Clones" have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks. Cloning humans from body parts is also a common theme in science fiction. Cloning features strongly among the science fiction conventions parodied in Woody Allen's "Sleeper", the plot of which centres around an attempt to clone an assassinated dictator from his disembodied nose. In the 2008 "Doctor Who" story "Journey's End", a duplicate version of the Tenth Doctor spontaneously grows from his severed hand, which had been cut off in a sword fight during an earlier episode. After the death of her beloved 14-year-old Coton de Tulear named Samantha in late 2017, Barbra Streisand announced that she had cloned the dog, and was now "waiting for [the two cloned pups] to get older so [she] can see if they have [Samantha's] brown eyes and her seriousness". The operation cost $50,000 through the pet cloning company ViaGen. Genetic engineering methods are weakly represented in film; Michael Clark, writing for The Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" in films such as Roger Spottiswoode's 2000 "The 6th Day", which makes use of the trope of a "vast clandestine laboratory ... 
filled with row upon row of 'blank' human bodies kept floating in tanks of nutrient liquid or in suspended animation", imagery clearly intended to incite fear. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience. Cloning and identity. Science fiction has used cloning, most commonly and specifically human cloning, to raise questions of identity. "A Number" is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black) – two of whom are clones of the first one. "A Number" was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films. In 2012, a Japanese television series named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful about the love from her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house, and heads to Tokyo to seek out the truth behind her birth. She later discovers that she is a clone. In the 2013 television series "Orphan Black", cloning is the subject of a scientific study on the behavioral adaptation of the clones. In a similar vein, the book "The Double" by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone. Cloning as resurrection. Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel "The Boys from Brazil" and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler. In Michael Crichton's 1990 novel "Jurassic Park", which spawned a series of "Jurassic Park" feature films, the bioengineering company InGen develops a technique to resurrect extinct species of dinosaurs by creating cloned creatures using DNA extracted from fossils. The cloned dinosaurs are used to populate the Jurassic Park wildlife park for the entertainment of visitors. The scheme goes disastrously wrong when the dinosaurs escape their enclosures. Despite being selectively cloned as females to prevent them from breeding, the dinosaurs develop the ability to reproduce through parthenogenesis. Cloning for warfare. The use of cloning for military purposes has also been explored in several fictional works. In "Doctor Who", an alien race of armour-clad, warlike beings called Sontarans was introduced in the 1973 serial "The Time Warrior". Sontarans are depicted as squat, bald creatures who have been genetically engineered for combat. Their weak spot is a "probic vent", a small socket at the back of their neck which is associated with the cloning process. The concept of cloned soldiers being bred for combat was revisited in "The Doctor's Daughter" (2008), when the Doctor's DNA is used to create a female warrior called Jenny. The 1977 film "Star Wars" was set against the backdrop of a historical conflict called the Clone Wars. 
The events of this war were not fully explored until the prequel films "Star Wars: Episode II – Attack of the Clones" (2002) and "Star Wars: Episode III – Revenge of the Sith" (2005), which depict a space war waged by a massive army of heavily armoured clone troopers that leads to the foundation of the Galactic Empire. Cloned soldiers are "manufactured" on an industrial scale, genetically conditioned for obedience and combat effectiveness. It is also revealed that the popular character Boba Fett originated as a clone of Jango Fett, a mercenary who served as the genetic template for the clone troopers. Cloning for exploitation. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel "Never Let Me Go" and the 2010 film adaptation are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film "The Island" revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence. The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film "Moon". In the futuristic novel "Cloud Atlas" and subsequent film, one of the story lines focuses on a genetically engineered fabricant clone named Sonmi~451, one of millions raised in an artificial "wombtank", destined to serve from birth. She is one of many created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves. In the film "Us", at some point prior to the 1980s, the US Government creates clones of every citizen of the United States with the intention of using them to control their original counterparts, akin to voodoo dolls. This fails, as the government was able to copy bodies but unable to copy the souls of those it cloned. The project is abandoned and the clones are trapped exactly mirroring their above-ground counterparts' actions for generations. In the present day, the clones launch a surprise attack and manage to carry out a mass genocide of their unaware counterparts.
6911
7903804
https://en.wikipedia.org/wiki?curid=6911
Cellulose
Cellulose is an organic compound with the formula (C6H10O5)n, a polysaccharide consisting of a linear chain of several hundred to many thousands of β(1→4) linked D-glucose units. Cellulose is an important structural component of the primary cell wall of green plants, many forms of algae and the oomycetes. Some species of bacteria secrete it to form biofilms. Cellulose is the most abundant organic polymer on Earth. The cellulose content of cotton fibre is 90%, that of wood is 40–50%, and that of dried hemp is approximately 57%. Cellulose is mainly used to produce paperboard and paper. Smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under development as a renewable fuel source. Cellulose for industrial use is mainly obtained from wood pulp and cotton. Cellulose is also greatly affected by direct interaction with several organic liquids. Some animals, particularly ruminants and termites, can digest cellulose with the help of symbiotic micro-organisms that live in their guts, such as "Trichonympha". In human nutrition, cellulose is a non-digestible constituent of insoluble dietary fiber, acting as a hydrophilic bulking agent for feces and potentially aiding in defecation. History. Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s and cellophane was invented in 1912. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda. Structure and properties. Cellulose has no taste, is odorless, is hydrophilic with a contact angle of 20–30 degrees, is insoluble in water and most organic solvents, is chiral and is biodegradable. It was shown to melt at 467 °C in pulse tests made by Dauenhauer "et al." (2016). It can be broken down chemically into its glucose units by treating it with concentrated mineral acids at high temperature. Cellulose is derived from D-glucose units, which condense through β(1→4)-glycosidic bonds. This linkage motif contrasts with that for α(1→4)-glycosidic bonds present in starch and glycogen. Cellulose is a straight chain polymer. Unlike starch, no coiling or branching occurs and the molecule adopts an extended and rather stiff rod-like conformation, aided by the equatorial conformation of the glucose residues. The multiple hydroxyl groups on the glucose from one chain form hydrogen bonds with oxygen atoms on the same or on a neighbour chain, holding the chains firmly together side-by-side and forming "microfibrils" with high tensile strength. This confers tensile strength in cell walls where cellulose microfibrils are meshed into a polysaccharide "matrix". The high tensile strength of plant stems and of tree wood also arises from the arrangement of cellulose fibers intimately distributed into the lignin matrix. The mechanical role of cellulose fibers in the wood matrix, responsible for its strong structural resistance, can be compared somewhat to that of the reinforcement bars in concrete, with lignin playing the role of the hardened cement paste acting as the "glue" between the cellulose fibres. 
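As a quick, illustrative aid to the formula above, the short sketch below converts between a chain's degree of polymerization (the n in (C6H10O5)n, discussed further below) and its approximate molar mass; the rounded monomer and end-group masses are assumptions for the example, not values taken from this article.

```python
# Rough conversion between degree of polymerization (DP, the n in (C6H10O5)n)
# and the molar mass of a single cellulose chain. The repeat-unit and
# end-group masses are rounded, so this is only an order-of-magnitude sketch.
REPEAT_MASS = 162.14   # g/mol, anhydroglucose repeat unit C6H10O5
END_GROUPS = 18.02     # g/mol, the extra H and OH at the two chain ends

def chain_mass(dp: int) -> float:
    """Approximate molar mass (g/mol) of a cellulose chain with DP repeat units."""
    return dp * REPEAT_MASS + END_GROUPS

def dp_from_mass(molar_mass: float) -> int:
    """Approximate DP of a chain with the given molar mass (g/mol)."""
    return round((molar_mass - END_GROUPS) / REPEAT_MASS)

# Chain lengths of a few hundred to about ten thousand units, as discussed
# below for wood pulp, cotton and bacterial cellulose:
for dp in (300, 1700, 10000):
    print(f"DP {dp:>6}: about {chain_mass(dp) / 1000:.0f} kg/mol")
```

Under these assumptions, a wood-pulp chain of 300 units comes to roughly 49 kg/mol, while a 10,000-unit bacterial cellulose chain exceeds 1,600 kg/mol.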
Mechanical properties of cellulose in primary plant cell wall are correlated with growth and expansion of plant cells. Live fluorescence microscopy techniques are promising in investigation of the role of cellulose in growing plant cells. Compared to starch, cellulose is also much more crystalline. Whereas starch undergoes a crystalline to amorphous transition when heated beyond 60–70 °C in water (as in cooking), cellulose requires a temperature of 320 °C and pressure of 25 MPa to become amorphous in water. Several types of cellulose are known. These forms are distinguished according to the location of hydrogen bonds between and within strands. Natural cellulose is cellulose I, with structures Iα and Iβ. Cellulose produced by bacteria and algae is enriched in Iα while cellulose of higher plants consists mainly of Iβ. Cellulose in regenerated cellulose fibers is cellulose II. The conversion of cellulose I to cellulose II is irreversible, suggesting that cellulose I is metastable and cellulose II is stable. With various chemical treatments it is possible to produce the structures cellulose III and cellulose IV. Many properties of cellulose depend on its chain length or degree of polymerization, the number of glucose units that make up one polymer molecule. Cellulose from wood pulp has typical chain lengths between 300 and 1700 units; cotton and other plant fibers as well as bacterial cellulose have chain lengths ranging from 800 to 10,000 units. Molecules with very small chain length resulting from the breakdown of cellulose are known as cellodextrins; in contrast to long-chain cellulose, cellodextrins are typically soluble in water and organic solvents. The chemical formula of cellulose is (C6H10O5)n where n is the degree of polymerization and represents the number of glucose groups. Plant-derived cellulose is usually found in a mixture with hemicellulose, lignin, pectin and other substances, while bacterial cellulose is quite pure, has a much higher water content and higher tensile strength due to higher chain lengths. Cellulose consists of fibrils with crystalline and amorphous regions. These cellulose fibrils may be individualized by mechanical treatment of cellulose pulp, often assisted by chemical oxidation or enzymatic treatment, yielding semi-flexible cellulose nanofibrils generally 200 nm to 1 μm in length depending on the treatment intensity. Cellulose pulp may also be treated with strong acid to hydrolyze the amorphous fibril regions, thereby producing short rigid cellulose nanocrystals a few 100 nm in length. These nanocelluloses are of high technological interest due to their self-assembly into cholesteric liquid crystals, production of hydrogels or aerogels, use in nanocomposites with superior thermal and mechanical properties, and use as Pickering stabilizers for emulsions. Processing. Biosynthesis. In plants cellulose is synthesized at the plasma membrane by rosette terminal complexes (RTCs). The RTCs are hexameric protein structures, approximately 25 nm in diameter, that contain the cellulose synthase enzymes that synthesise the individual cellulose chains. Each RTC floats in the cell's plasma membrane and "spins" a microfibril into the cell wall. RTCs contain at least three different cellulose synthases, encoded by "CesA" ("Ces" is short for "cellulose synthase") genes, in an unknown stoichiometry. Separate sets of "CesA" genes are involved in primary and secondary cell wall biosynthesis. 
There are known to be about seven subfamilies in the plant "CesA" superfamily, some of which include the more cryptic, tentatively-named "Csl" (cellulose synthase-like) enzymes. These cellulose synthases use UDP-glucose to form the β(1→4)-linked cellulose. Bacterial cellulose is produced using the same family of proteins, although the gene is called "BcsA" for "bacterial cellulose synthase" or "CelA" for "cellulose" in many instances. In fact, plants acquired "CesA" from the endosymbiosis event that produced the chloroplast. All known cellulose synthases belong to glucosyltransferase family 2 (GT2). Cellulose synthesis requires chain initiation and elongation, and the two processes are separate. Cellulose synthase ("CesA") initiates cellulose polymerization using a steroid primer, sitosterol-beta-glucoside, and UDP-glucose. It then utilises UDP-D-glucose precursors to elongate the growing cellulose chain. A cellulase may function to cleave the primer from the mature chain. Cellulose is also synthesised by tunicate animals, particularly in the tests of ascidians (where the cellulose was historically termed "tunicine" (tunicin)). Breakdown (cellulolysis). Cellulolysis is the process of breaking down cellulose into smaller polysaccharides called cellodextrins or completely into glucose units; this is a hydrolysis reaction. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides. However, this process can be significantly intensified in a proper solvent, e.g. in an ionic liquid. Most mammals have limited ability to digest dietary fibre such as cellulose. Ruminants such as cows and sheep contain symbiotic anaerobic bacteria (such as "Cellulomonas" and "Ruminococcus" spp.) in the flora of the rumen, and these bacteria produce enzymes called cellulases that hydrolyze cellulose. The breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Horses use cellulose in their diet by fermentation in their hindgut. Some termites contain in their hindguts certain flagellate protozoa producing such enzymes, whereas others contain bacteria or may produce cellulase. The enzymes used to cleave the glycosidic linkage in cellulose are glycoside hydrolases including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules. Breakdown (thermolysis). At temperatures above 350 °C, cellulose undergoes thermolysis (also called 'pyrolysis'), decomposing into solid char, vapors, aerosols, and gases such as carbon dioxide. Maximum yield of vapors which condense to a liquid called "bio-oil" is obtained at 500 °C. Semi-crystalline cellulose polymers react at pyrolysis temperatures (350–600 °C) in a few seconds; this transformation has been shown to occur via a solid-to-liquid-to-vapor transition, with the liquid (called "intermediate liquid cellulose" or "molten cellulose") existing for only a fraction of a second. Glycosidic bond cleavage produces short cellulose chains of two-to-seven monomers comprising the melt. Vapor bubbling of intermediate liquid cellulose produces aerosols, which consist of short chain anhydro-oligomers derived from the melt. 
Continuing decomposition of molten cellulose produces volatile compounds including levoglucosan, furans, pyrans, light oxygenates, and gases via primary reactions. Within thick cellulose samples, volatile compounds such as levoglucosan undergo 'secondary reactions' to volatile products including pyrans and light oxygenates such as glycolaldehyde. Hemicellulose. Hemicelluloses are polysaccharides related to cellulose that comprise about 20% of the biomass of land plants. In contrast to cellulose, hemicelluloses are derived from several sugars in addition to glucose, especially xylose but also including mannose, galactose, rhamnose, and arabinose. Hemicelluloses consist of shorter chains – between 500 and 3000 sugar units. Furthermore, hemicelluloses are branched, whereas cellulose is unbranched. Regenerated cellulose. Cellulose is soluble in several kinds of media, several of which are the basis of commercial technologies. These dissolution processes are reversible and are used in the production of regenerated celluloses (such as viscose and cellophane) from dissolving pulp. The most important solubilizing agent is carbon disulfide in the presence of alkali. Other agents include Schweizer's reagent, "N"-methylmorpholine "N"-oxide, and lithium chloride in dimethylacetamide. In general, these agents modify the cellulose, rendering it soluble. The agents are then removed concomitant with the formation of fibers. Cellulose is also soluble in many kinds of ionic liquids. The history of regenerated cellulose is often cited as beginning with George Audemars, who first manufactured regenerated nitrocellulose fibers in 1855. Although these fibers were soft and strong, resembling silk, they had the drawback of being highly flammable. Hilaire de Chardonnet perfected production of nitrocellulose fibers, but manufacturing of these fibers by his process was relatively uneconomical. In 1890, L.H. Despeissis invented the cuprammonium process – which uses a cuprammonium solution to solubilize cellulose – a method still used today for production of artificial silk. In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons. Regenerated cellulose can be used to manufacture a wide variety of products. While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers. The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties, mainly cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate are film- and fiber-forming materials that find a variety of uses. Nitrocellulose was initially used as an explosive and was an early film forming material. 
When plasticized with camphor, nitrocellulose gives celluloid. Cellulose ether derivatives include: Sodium carboxymethyl cellulose can be cross-linked to give croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Furthermore, by the covalent attachment of thiol groups to cellulose ethers such as sodium carboxymethyl cellulose, ethyl cellulose or hydroxyethyl cellulose, mucoadhesive and permeation-enhancing properties can be introduced. Thiolated cellulose derivatives (see thiomers) also exhibit high binding properties for metal ions. Commercial applications. Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Aspirational. Energy crops: The major combustible component of non-food energy crops is cellulose, with lignin second. Non-food energy crops produce more usable energy than edible energy crops (which have a large starch component), but still compete with food crops for agricultural land and water resources. Typical non-food energy crops include industrial hemp, switchgrass, "Miscanthus", "Salix" (willow), and "Populus" (poplar) species. A strain of "Clostridium" bacteria found in zebra dung can convert nearly any form of cellulose into butanol fuel. Another possible application is as an insect repellent. Dung-geneering. Cellulose has been extracted from cow dung by pressurized spinning from a horizontal vessel, a process capable of structuring it into small nano-fibers.
6916
45853341
https://en.wikipedia.org/wiki?curid=6916
Colony
A colony is a territory subject to a form of foreign rule, under which the territory and its indigenous peoples are ruled separately from the foreign rulers, the colonizer, and the colonizer's "metropole" (or "mother country"). This separated rule was often organized into colonial empires, with their metropoles at their centers, making colonies neither annexed nor integrated territories, nor client states. New imperialism and its colonialism in particular advanced this separated rule and its lasting coloniality. Colonies were most often set up and colonized for exploitation and possibly settlement by colonists. The term colony originates from the ancient Roman "colonia", a type of Roman settlement. Derived from "colonus" (farmer, cultivator, planter, or settler), it carries with it the sense of 'farm' and 'landed estate'. Furthermore, the term was used to refer to the older Greek "apoikia", which were overseas settlements by ancient Greek city-states. The city that founded such a settlement became known as its "metropolis" ("mother-city"). Since early-modern times, historians, administrators, and political scientists have generally used the term "colony" to refer mainly to the many different overseas territories of particularly European states between the 15th and 20th centuries CE, with colonialism and decolonization as corresponding phenomena. While colonies often developed from trading outposts or territorial claims, such areas do not need to be a product of colonization, nor become colonially organized territories. Furthermore, territories do not need to have been militarily conquered and occupied to come under colonial rule and to be considered de facto colonies: neocolonial exploitation of dependency, or the imperialist use of power to intervene and force policy, might also make a territory be considered a colony. This broadens the concept to include indirect rule and puppet states (contrasted with more independent types of client states such as vassal states). Subsequently, some historians have used the term "informal colony" to refer to a country under the "de facto" control of another state, though such broadening of the concept is often contentious. Contemporary colonies are identified and organized as dependent territories that are not sufficiently self-governing. Other past colonies have become either sufficiently incorporated and self-governing, or independent, with some dominated to a varying degree by remaining colonial settler societies or by neocolonialism. Concept. The word "colony" comes from the Latin word "colonia", used for ancient Roman outposts and eventually for cities. This in turn derives from the word "colonus", which referred to a Roman tenant farmer. Settlements that began as Roman "colonia" include cities from Cologne (which retains this history in its name) to Belgrade to York. A telltale sign of a settlement within the Roman sphere of influence once being a Roman colony is a city centre with a grid pattern. With a long and changing history of use, the term "colony" has come to be distinguished from "settler colony", which refers to the more particular type of settlement or community rather than to the territory itself. Current colonies. The Special Committee on Decolonization maintains the United Nations list of non-self-governing territories, which identifies areas the United Nations (though not without controversy) believes are colonies. Given that dependent territories have varying degrees of autonomy and political power in the affairs of the controlling state, there is disagreement over the classification of "colony". Israel. 
A number of academic studies analyze Israel through settler-colonial frameworks: Gershon Shafir (1996) characterizes early Zionism as a settler-colonial movement in "Land, Labor and the Origins of the Israeli-Palestinian Conflict". Patrick Wolfe (2006) includes Israel among settler colonial societies in the "Journal of Genocide Research". South Africa has formally accused Israel of practicing apartheid against Palestinians. In 2023, South Africa filed a case at the International Court of Justice alleging that Israel violates the International Convention on the Suppression and Punishment of the Crime of Apartheid. During oral arguments before the court in 2024, South Africa further described Israel as a settler colonial state, asserting that its occupation of Palestinian territory is “indistinguishable from settler colonialism.” This follows earlier reports from 2021 and 2022 by Human Rights Watch and Amnesty International, which both concluded that Israel maintains a system of apartheid over Palestinians.
6918
40310821
https://en.wikipedia.org/wiki?curid=6918
Rod (optical phenomenon)
In cryptozoology and ufology, "rods" (also known as "skyfish", "air rods", or "solar entities") are elongated visual artifacts appearing in photographic images and video recordings. Most optical analyses to date have concluded that the images are insects moving across the frame as the photo is being captured, although cryptozoologists and ufologists claim that they are paranormal in nature. Optical analysis. Robert Todd Carroll (2003), having consulted an entomologist (Doug Yanega), identified rods as images of flying insects recorded over several cycles of wing-beating on video recording devices. Because the insect is captured in the image a number of times while propelling itself forward, it gives the illusion of a single elongated rod-like body with bulges. A 2000 report by staff at "The Straight Dope" also explained rods as such phenomena, namely tricks of light which result from how (primarily video) images of flying insects are recorded and played back, adding that investigators have shown the rod-like bodies to be a result of motion blur if the camera is shooting with relatively long exposure times. The claims of these being extraordinary creatures, possibly alien, have been advanced by either people with active imaginations or hoaxers. In August 2005, China Central Television (CCTV) aired a two-part documentary about flying rods in China. It reported on events from May to June of the same year at the Tonghua Zhenguo Pharmaceutical Company in Tonghua City, Jilin Province, that debunked the flying rods. Surveillance cameras in the facility's compound captured video footage of flying rods identical to those shown in Jose Escamilla's video. Getting no satisfactory answer to the phenomenon, curious scientists at the facility decided that they would try to solve the mystery by attempting to catch these airborne creatures. Huge nets were set up and the same surveillance cameras then captured images of rods flying into the trap. When the nets were inspected, the "rods" turned out to be no more than regular moths and other ordinary flying insects. Subsequent investigations proved that the appearance of flying rods on video was an optical illusion created by the slower recording speed of the camera. After attending a lecture by Jose Escamilla, UFO investigator Robert Sheaffer wrote that "some of his 'rods' were obviously insects zipping across the field at a high angular rate" and others appeared to be "appendages" which were birds' wings blurred by the camera exposure. Paranormal claims. Various paranormal interpretations of this phenomenon appear in popular culture. One of the more outspoken proponents of rods as alien life forms was Jose Escamilla, who claimed to have been the first to film them on March 19, 1994, in Roswell, New Mexico, while attempting to film a UFO. Escamilla later made additional videos and embarked on lecture tours to promote his claims.
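As a rough numerical illustration of the motion-blur explanation in the optical analysis above, the sketch below shows how many wing-beat cycles and how much forward travel a single video exposure can record; the frame rate, wing-beat frequency, and flight speed are assumed values for the example, not figures taken from the investigations cited.

```python
# Back-of-the-envelope check of the "rod" illusion: during one long video
# exposure, a flying insect advances a noticeable distance while beating its
# wings several times, so a single frame records an elongated streak with
# periodic bulges. All numbers below are illustrative assumptions.
exposure_s = 1 / 30          # full-frame exposure on a 30 fps camera, seconds
wingbeat_hz = 150            # moth-like wing-beat frequency, beats per second
flight_speed_m_s = 2.0       # forward speed of the insect, metres per second

wingbeats_per_frame = wingbeat_hz * exposure_s
streak_length_m = flight_speed_m_s * exposure_s

print(f"wing-beat cycles recorded in one frame: {wingbeats_per_frame:.1f}")
print(f"distance travelled during the exposure: {streak_length_m * 100:.1f} cm")
# About 5 cycles over roughly 7 cm of travel smear a 1-2 cm insect into a
# long, multi-lobed "rod" on the recorded frame.
```

Under these assumptions a small moth is recorded as a streak several times its own body length, with one bulge per wing-beat, matching the multi-bulged rods seen in the footage.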
6920
7903804
https://en.wikipedia.org/wiki?curid=6920
Column
A column or pillar in architecture and structural engineering is a structural element that transmits, through compression, the weight of the structure above to other structural elements below. In other words, a column is a compression member. The term "column" applies especially to a large round support (the shaft of the column) with a capital and a base or pedestal, which is made of stone, or appears to be so. A small wooden or metal support is typically called a "post". Supports with a rectangular or other non-round section are usually called "piers". For the purpose of wind or earthquake engineering, columns may be designed to resist lateral forces. Other compression members are often termed "columns" because of the similar stress conditions. Columns are frequently used to support beams or arches on which the upper parts of walls or ceilings rest. In architecture, "column" refers to such a structural element that also has certain proportional and decorative features. A column might also be a decorative element not needed for structural purposes; many columns are engaged, that is to say form part of a wall. A long sequence of columns joined by an entablature is known as a colonnade. History. Antiquity. All significant Iron Age civilizations of the Near East and Mediterranean made some use of columns. Egyptian. In ancient Egyptian architecture as early as 2600 BC, the architect Imhotep made use of stone columns whose surface was carved to reflect the organic form of bundled reeds, like papyrus, lotus and palm. In later Egyptian architecture faceted cylinders were also common. Their form is thought to derive from archaic reed-built shrines. Carved from stone, the columns were highly decorated with carved and painted hieroglyphs, texts, ritual imagery and natural motifs. Egyptian columns are famously present in the Great Hypostyle Hall of Karnak, where 134 columns are lined up in sixteen rows, with some columns reaching heights of 24 metres. One of the most important types is the papyriform column. The origin of these columns goes back to the 5th Dynasty. They are composed of lotus (papyrus) stems which are drawn together into a bundle decorated with bands: the capital, instead of opening out into the shape of a bellflower, swells out and then narrows again like a flower in bud. The base, which tapers to take the shape of a half-sphere like the stem of the lotus, has a continuously recurring decoration of stipules. Greek and Roman. The Minoans used whole tree-trunks, usually turned upside down in order to prevent re-growth, stood on a base set in the stylobate (floor base) and topped by a simple round capital. These were then painted as in the most famous Minoan palace of Knossos. The Minoans employed columns to create large open-plan spaces, light-wells and as a focal point for religious rituals. These traditions were continued by the later Mycenaean civilization, particularly in the megaron or hall at the heart of their palaces. The importance of columns and their reference to palaces and therefore authority is evidenced in their use in heraldic motifs such as the famous lion-gate of Mycenae where two lions stand on each side of a column. Being made of wood these early columns have not survived, but their stone bases have and through these we may see their use and arrangement in these palace buildings. 
The Egyptians, Persians and other civilizations mostly used columns for the practical purpose of holding up the roof inside a building, preferring outside walls to be decorated with reliefs or painting, but the Ancient Greeks, followed by the Romans, loved to use them on the outside as well, and the extensive use of columns on the interior and exterior of buildings is one of the most characteristic features of classical architecture, in buildings like the Parthenon. The Greeks developed the classical orders of architecture, which are most easily distinguished by the form of the column and its various elements. Their Doric, Ionic, and Corinthian orders were expanded by the Romans to include the Tuscan and Composite orders. Persian. Some of the most elaborate columns in the ancient world were those of the Persians, especially the massive stone columns erected in Persepolis. They included double-bull structures in their capitals. The Hall of Hundred Columns at Persepolis, measuring 70 × 70 metres, was built by the Achaemenid king Darius I (524–486 BC). Many of the ancient Persian columns are standing, some being more than 30 metres tall. Tall columns with bull's head capitals were used for porticoes and to support the roofs of the hypostyle hall, partly inspired by the ancient Egyptian precedent. Since the columns carried timber beams rather than stone, they could be taller, slimmer and more widely spaced than Egyptian ones. South Asia. Indo-Corinthian capitals are capitals crowning columns or pilasters, which can be found in the northwestern Indian subcontinent, and usually combine Hellenistic and Indian elements. These capitals are typically dated to the first centuries of the Common Era, and constitute an important aspect of Greco-Buddhist art. Indo-Corinthian capitals display a design and foliage structure which is derived from the academic Corinthian capital developed in Greece. Its importation to India followed the road of Hellenistic expansion in the East in the centuries after the conquests of Alexander the Great. In particular the Greco-Bactrian kingdom, centered on Bactria (today's northern Afghanistan), upheld the type at the doorstep of India, in such places as Ai-Khanoum until the end of the 2nd century BCE. In India, the design was often adapted, usually taking a more elongated form and sometimes being combined with scrolls, generally within the context of Buddhist stupas and temples. Middle Ages. Columns, or at least large structural exterior ones, became much less significant in the architecture of the Middle Ages. The classical forms were abandoned in both Byzantine and Romanesque architecture in favour of more flexible forms, with capitals often using various types of foliage decoration, and in the West scenes with figures carved in relief. During the Romanesque period, builders continued to reuse and imitate ancient Roman columns wherever possible; where new, the emphasis was on elegance and beauty, as illustrated by twisted columns. Often they were decorated with mosaics. Renaissance and later styles. Renaissance architecture was keen to revive the classical vocabulary and styles, and the informed use and variation of the classical orders remained fundamental to the training of architects throughout Baroque, Rococo and Neo-classical architecture. Structure. Early columns were constructed of stone, some out of a single piece of stone. Monolithic columns are among the heaviest stones used in architecture. 
Other stone columns are created out of multiple sections of stone, mortared or dry-fit together. In many classical sites, sectioned columns were carved with a centre hole or depression so that they could be pegged together, using stone or metal pins. The design of most classical columns incorporates entasis (the inclusion of a slight outward curve in the sides) plus a reduction in diameter along the height of the column, so that the top is as little as 83% of the bottom diameter. This reduction mimics the parallax effects which the eye expects to see, and tends to make columns look taller and straighter than they are, while entasis adds to that effect. Flutes and fillets run up the shaft of columns. The flute is the part of the column that is indented with a semicircular profile. The fillet is the flat part between the flutes on Ionic order columns. The flute width changes on all tapered columns as it goes up the shaft and stays the same on all non-tapered columns. This was done to add visual interest to the columns. The Ionic and the Corinthian are the only orders that have fillets and flutes. The Doric style has flutes but not fillets. Doric flutes are connected at a sharp point where the fillets are located on Ionic and Corinthian order columns. Nomenclature. Most classical columns arise from a basis, or base, that rests on the stylobate, or foundation, except for those of the Doric order, which usually rest directly on the stylobate. The basis may consist of several elements, beginning with a wide, square slab known as a plinth. The simplest bases consist of the plinth alone, sometimes separated from the column by a convex circular cushion known as a torus. More elaborate bases include two toruses, separated by a concave section or channel known as a scotia or trochilus. Scotiae could also occur in pairs, separated by a convex section called an astragal, or bead, narrower than a torus. Sometimes these sections were accompanied by still narrower convex sections, known as annulets or fillets. At the top of the shaft is a capital, upon which the roof or other architectural elements rest. In the case of Doric columns, the capital usually consists of a round, tapering cushion, or echinus, supporting a square slab, known as an abax or abacus. Ionic capitals feature a pair of volutes, or scrolls, while Corinthian capitals are decorated with reliefs in the form of acanthus leaves. Either type of capital could be accompanied by the same moldings as the base. In the case of free-standing columns, the decorative elements atop the shaft are known as a finial. Modern columns may be constructed out of steel, poured or precast concrete, or brick, left bare or clad in an architectural covering, or veneer. An impost, or pier, is the topmost member of a column and is used to support an arch. The bottom-most part of the arch, called the springing, rests on the impost. Equilibrium, instability, and loads. As the axial load on a perfectly straight slender column with elastic material properties is increased in magnitude, this ideal column passes through three states: stable equilibrium, neutral equilibrium, and instability. The straight column under load is in stable equilibrium if a lateral force, applied between the two ends of the column, produces a small lateral deflection which disappears and the column returns to its straight form when the lateral force is removed. 
If the column load is gradually increased, a condition is reached in which the straight form of equilibrium becomes so-called neutral equilibrium, and a small lateral force will produce a deflection that does not disappear and the column remains in this slightly bent form when the lateral force is removed. The load at which neutral equilibrium of a column is reached is called the critical or buckling load. The state of instability is reached when a slight increase of the column load causes uncontrollably growing lateral deflections leading to complete collapse. For an axially loaded straight column with any end support conditions, the equation of static equilibrium, in the form of a differential equation, can be solved for the deflected shape and critical load of the column. With hinged, fixed or free end support conditions the deflected shape in neutral equilibrium of an initially straight column with uniform cross section throughout its length always follows a partial or composite sinusoidal curve shape, and the critical load is given by formula_1 where "r" = radius of gyration of column cross-section which is equal to the square root of (I/A), "K" = ratio of the longest half sine wave to the actual column length, "E""t" = tangent modulus at the stress "F"cr, and "KL" = effective length (length of an equivalent hinged-hinged column). From Equation (2) it can be noted that the buckling strength of a column is inversely proportional to the square of its length. When the critical stress, "F"cr ("F"cr ="P"cr/"A", where "A" = cross-sectional area of the column), is greater than the proportional limit of the material, the column is experiencing inelastic buckling. Since at this stress the slope of the material's stress-strain curve, "E""t" (called the tangent modulus), is smaller than that below the proportional limit, the critical load at inelastic buckling is reduced. More complex formulas and procedures apply for such cases, but in its simplest form the critical buckling load formula is given as Equation (3), formula_2 A column with a cross section that lacks symmetry may suffer torsional buckling (sudden twisting) before, or in combination with, lateral buckling. The presence of the twisting deformations renders both theoretical analyses and practical designs rather complex. Eccentricity of the load, or imperfections such as initial crookedness, decreases column strength. If the axial load on the column is not concentric, that is, its line of action is not precisely coincident with the centroidal axis of the column, the column is characterized as eccentrically loaded. The eccentricity of the load, or an initial curvature, subjects the column to immediate bending. The increased stresses due to the combined axial-plus-flexural stresses result in a reduced load-carrying ability. Column elements are considered to be massive if their smallest side dimension is equal to or more than 400 mm. Massive columns have the ability to increase in carrying strength over long time periods (even during periods of heavy load). Taking into account the fact, that possible structural loads may increase over time as well (and also the threat of progressive failure), massive columns have an advantage compared to non-massive ones. Extensions. When a column is too long to be built or transported in one piece, it has to be extended or spliced at the construction site. 
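The critical-load relationship referred to above as Equation (2) survives here only as a lost formula placeholder; in its standard form it is Fcr = π²Et/(KL/r)², which for purely elastic behaviour (Et = E) is equivalent to Euler's buckling load Pcr = π²EI/(KL)². The following minimal Python sketch evaluates both forms for a hypothetical pinned-pinned solid circular steel column; the material properties and dimensions are illustrative assumptions, not values taken from the text.

    import math

    def euler_critical_load(E, I, K, L):
        """Elastic critical load P_cr = pi^2 * E * I / (K*L)^2 (Euler column formula)."""
        return math.pi ** 2 * E * I / (K * L) ** 2

    def critical_stress(E_t, K, L, r):
        """Critical stress F_cr = pi^2 * E_t / (K*L/r)^2, with E_t the tangent modulus
        and r = sqrt(I/A) the radius of gyration, as defined in the text."""
        return math.pi ** 2 * E_t / (K * L / r) ** 2

    # Hypothetical example: pinned-pinned (K = 1.0) solid circular steel column.
    E = 200e9                  # Young's modulus of steel, Pa (elastic range, so E_t = E)
    R = 0.05                   # section radius, m
    A = math.pi * R ** 2       # cross-sectional area, m^2
    I = math.pi * R ** 4 / 4   # second moment of area, m^4
    r = math.sqrt(I / A)       # radius of gyration, m
    L = 3.0                    # column length, m

    P_cr = euler_critical_load(E, I, 1.0, L)
    print(f"P_cr ~ {P_cr / 1e3:.0f} kN, F_cr ~ {critical_stress(E, 1.0, L, r) / 1e6:.0f} MPa")
    # Doubling the length quarters the critical load (inverse-square dependence):
    print(euler_critical_load(E, I, 1.0, 2 * L) / P_cr)   # 0.25

The final line confirms the inverse-square dependence on length noted above: doubling the column length reduces the critical load to one quarter.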
A reinforced concrete column is extended by having the steel reinforcing bars protrude a few inches or feet above the top of the concrete, then placing the next level of reinforcing bars to overlap, and pouring the concrete of the next level. A steel column is extended by welding or bolting splice plates on the flanges and webs or walls of the columns to provide a few inches or feet of load transfer from the upper to the lower column section. A timber column is usually extended by the use of a steel tube or wrapped-around sheet-metal plate bolted onto the two connecting timber sections. Foundations. A column that carries the load down to a foundation must have means to transfer the load without overstressing the foundation material. Reinforced concrete and masonry columns are generally built directly on top of concrete foundations. When seated on a concrete foundation, a steel column must have a base plate to spread the load over a larger area, and thereby reduce the bearing pressure. The base plate is a thick, rectangular steel plate usually welded to the bottom end of the column. Orders. The Roman author Vitruvius, relying on the writings (now lost) of Greek authors, tells us that the ancient Greeks believed that their Doric order developed from techniques for building in wood. The earlier smoothed tree-trunk was replaced by a stone cylinder. Doric order. The Doric order is the oldest and simplest of the classical orders. It is composed of a vertical cylinder that is wider at the bottom. It generally has neither a base nor a detailed capital. It is instead often topped with an inverted frustum of a shallow cone or a cylindrical band of carvings. It is often referred to as the masculine order because it is represented in the bottom level of the Colosseum and the Parthenon, and was therefore considered to be able to hold more weight. The height-to-thickness ratio is about 8:1. The shaft of a Doric Column is almost always fluted. The Greek Doric, developed in the western Dorian region of Greece, is the heaviest and most massive of the orders. It rises from the stylobate without any base; it is from four to six times as tall as its diameter; it has twenty broad flutes; the capital consists simply of a banded necking swelling out into a smooth echinus, which carries a flat square abacus; the Doric entablature is also the heaviest, being about one-fourth the height column. The Greek Doric order was not used after c. 100 B.C. until its “rediscovery” in the mid-eighteenth century. Tuscan order. The Tuscan order, also known as Roman Doric, is also a simple design, the base and capital both being series of cylindrical disks of alternating diameter. The shaft is almost never fluted. The proportions vary, but are generally similar to Doric columns. Height to width ratio is about 7:1. Ionic order. The Ionic column is considerably more complex than the Doric or Tuscan. It usually has a base and the shaft is often fluted (it has grooves carved up its length). The capital features a volute, an ornament shaped like a scroll, at the four corners. The height-to-thickness ratio is around 9:1. Due to the more refined proportions and scroll capitals, the Ionic column is sometimes associated with academic buildings. Ionic style columns were used on the second level of the Colosseum. Corinthian order. The Corinthian order is named for the Greek city-state of Corinth, to which it was connected in the period. 
However, according to the architectural historian Vitruvius, the column was created by the sculptor Callimachus, probably an Athenian, who drew acanthus leaves growing around a votive basket. In fact, the oldest known Corinthian capital was found in Bassae, dated at 427 BC. It is sometimes called the feminine order because it is on the top level of the Colosseum and holding up the least weight, and also has the slenderest ratio of thickness to height. Height to width ratio is about 10:1. Composite order. The Composite order draws its name from the capital being a composite of the Ionic and Corinthian capitals. The acanthus of the Corinthian column already has a scroll-like element, so the distinction is sometimes subtle. Generally the Composite is similar to the Corinthian in proportion and employment, often in the upper tiers of colonnades. Height to width ratio is about 11:1 or 12:1. Solomonic. A Solomonic column, sometimes called "barley sugar", begins on a base and ends in a capital, which may be of any order, but the shaft twists in a tight spiral, producing a dramatic, serpentine effect of movement. Solomonic columns were developed in the ancient world, but remained rare there. A famous marble set, probably 2nd century, was brought to Old St. Peter's Basilica by Constantine I, and placed round the saint's shrine, and was thus familiar throughout the Middle Ages, by which time they were thought to have been removed from the Temple of Jerusalem. The style was used in bronze by Bernini for his spectacular St. Peter's baldachin, actually a ciborium (which displaced Constantine's columns), and thereafter became very popular with Baroque and Rococo church architects, above all in Latin America, where they were very often used, especially on a small scale, as they are easy to produce in wood by turning on a lathe (hence also the style's popularity for spindles on furniture and stairs). Caryatid. A Caryatid is a sculpted female figure serving as an architectural support taking the place of a column or a pillar supporting an entablature on her head. The Greek term literally means "maidens of Karyai", an ancient town of Peloponnese. Engaged columns. In architecture, an engaged column is a column embedded in a wall and partly projecting from the surface of the wall, sometimes defined as semi or three-quarter detached. Engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral buildings. Pillar tombs. Pillar tombs are monumental graves, which typically feature a single, prominent pillar or column, often made of stone. A number of world cultures incorporated pillars into tomb structures. In the ancient Greek colony of Lycia in Anatolia, one of these edifices is located at the tomb of Xanthos. In the town of Hannassa in southern Somalia, ruins of houses with archways and courtyards have also been found along with other pillar tombs, including a rare octagonal tomb.
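To illustrate the base-plate sizing principle mentioned under Foundations above (the plate spreads the column load over a larger area so that the bearing pressure on the concrete foundation stays within an allowable value), here is a minimal sketch. The 900 kN load and 10 MPa allowable bearing pressure are assumed example figures; a real design would also check plate thickness, plate bending and the governing building code.

    import math

    def required_plate_side_mm(column_load_kN, allowable_bearing_MPa):
        """Side length (mm) of the smallest square base plate for which
        bearing pressure = load / area does not exceed the allowable value.
        (Plate bending/thickness and code safety factors are ignored here.)"""
        area_m2 = (column_load_kN * 1e3) / (allowable_bearing_MPa * 1e6)
        return math.sqrt(area_m2) * 1e3

    # Assumed example: a 900 kN column load on concrete with an allowable
    # bearing pressure of 10 MPa needs roughly a 300 mm x 300 mm plate.
    print(round(required_plate_side_mm(900, 10)))   # -> 300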
6921
31192532
https://en.wikipedia.org/wiki?curid=6921
Carmilla
Carmilla is an 1872 Gothic novella by Irish author Joseph Sheridan Le Fanu. It is one of the earliest known works of vampire literature, predating Bram Stoker's "Dracula" (1897) by 25 years. First published as a serial in "The Dark Blue" (1871–72), the story is narrated by a young woman who is preyed upon by a female vampire named "Carmilla". The titular character is the prototypical example of the fictional lesbian vampire, expressing romantic desires toward the protagonist. Since its publication, "Carmilla" has often been regarded as one of the most influential vampire stories of all time. The work tells the fictional story of Laura, a young woman living in a secluded Austrian castle, who becomes the object of both affection and predation by the enigmatic Carmilla. The female vampire gradually becomes drawn to Laura, leading to a complex and dangerous relationship marked by both romantic desires and vampiric violence. The novella was one of the first works of Gothic fiction to portray female empowerment, as Carmilla is the opposite of male vampires, since she is actually involved with her victims both emotionally and physically. In the novella, Le Fanu challenges the Victorian view of women as merely being useful possessions of men, depending on them and needing their guardianship. The character is also one of the first fictional figures to represent the concept of dualism, which is presented in the story through the repeated contrasting natures of both vampires and humans, as well as lesbian and heterosexual traits. Critics have stated that "Carmilla" exhibits many of the early traits of Gothic fiction, including a supernatural figure, an old castle, a strange atmosphere, and ominous elements. "Carmilla" deeply defined the vampire fiction genre and Gothic horror in general, having directly influenced later horror writers such as Bram Stoker, M.R. James, Henry James, and others. Due to its popularity, the work has been anthologised, having been adapted extensively for films, movies, operas, video games, comics, songs, cartoons, television, and other media since the late 19th century. Publication. "Carmilla", serialised in the literary magazine "The Dark Blue" in late 1871 and early 1872, was reprinted in Le Fanu's short-story collection "In a Glass Darkly" (1872). Comparing the work of two illustrators of the story, David Henry Friston and Michael Fitzgerald—whose work appears in the magazine article but not in modern printings of the book—reveals inconsistencies in the characters' depictions. Consequently, confusion has arisen relating the pictures to the plot. Isabella Mazzanti illustrated the book's 2014 edition, published by Editions Soleil and translated by Gaid Girard. Plot summary. Le Fanu presents the story as part of the casebook of Dr. Hesselius, whose departures from medical orthodoxy rank him as the first occult detective in literature. Laura, the woman protagonist, narrates, beginning with her childhood in a "picturesque and solitary" castle amid an extensive forest in Styria, where she lives with her father, a wealthy English widower retired from service to the Austrian Empire. When she was six, Laura had a vision of a very beautiful visitor in her bedchamber. She later claims to have been punctured in her breast, although no wound was found. All the household assure Laura that it was just a dream, but they step up security as well and there is no subsequent vision or visitation. 
Twelve years later, Laura and her father are admiring the sunset in front of the castle when her father tells her of a letter from his friend, General Spielsdorf. The General was supposed to visit them with his niece, Bertha Rheinfeldt, but Bertha suddenly died under mysterious circumstances. The General ambiguously concludes that he will discuss the circumstances in detail when they meet later. Laura, saddened by the loss of a potential friend, longs for a companion. A carriage accident outside Laura's home unexpectedly brings a girl of Laura's age into the family's care. Her name is Carmilla. Both girls instantly recognise each other from the "dream" they both had when they were young. Carmilla appears injured after her carriage accident, but her mysterious mother informs Laura's father that her journey is urgent and cannot be delayed. She arranges to leave her daughter with Laura and her father until she can return in three months. Before she leaves, she sternly notes that her daughter will not disclose any information whatsoever about her family, her past, or herself, and that Carmilla is of sound mind. Laura comments that this information seems needless to say, and her father laughs it off. Carmilla and Laura grow to be very close friends, but occasionally Carmilla's mood abruptly changes. She sometimes makes romantic advances towards Laura. Carmilla refuses to tell anything about herself, despite questioning by Laura. Her secrecy is not the only mysterious thing about Carmilla; she never joins the household in its prayers, she sleeps much of the day, and she seems to sleepwalk outside at night. Meanwhile, young women and girls in the nearby towns have begun dying from an unknown malady. When the funeral procession of one such victim passes by the two girls, Laura joins in the funeral hymn. Carmilla bursts out in rage and scolds Laura, complaining that the hymn hurts her ears. When a shipment of restored heirloom paintings arrives, Laura finds a portrait of her ancestor, Countess Mircalla Karnstein, dated 1698. The portrait resembles Carmilla exactly, down to the mole on her neck. Carmilla suggests that she might be descended from the Karnsteins, though the family died out centuries before. During Carmilla's stay, Laura has nightmares of a large, cat-like beast entering her room. The beast springs onto the bed and Laura feels something like two needles, an inch or two apart, darting deep into her breast. The beast then takes the form of a female figure and disappears through the door without opening it. In another nightmare, Laura hears a voice say, "Your mother warns you to beware of the assassin," and a sudden light reveals Carmilla standing at the foot of her bed, her nightdress drenched in blood. Laura's health declines, and her father has a doctor examine her. He finds a small, blue spot, an inch or two below her collar, where the creature in her dream bit her, and speaks privately with her father, only asking that Laura never be unattended. Her father sets out with Laura, in a carriage, for the ruined village of Karnstein, three miles distant. They leave a message behind asking Carmilla and one of the governesses to follow once the perpetually late-sleeping Carmilla awakes. En route to Karnstein, Laura and her father encounter General Spielsdorf. He tells them his own ghastly story. At a costume ball, Spielsdorf and his niece Bertha had met a very beautiful young woman named Millarca and her enigmatic mother. Bertha was immediately taken with Millarca. 
The mother convinced the General that she was an old friend of his and asked that Millarca be allowed to stay with them for three weeks while she attended to a secret matter of great importance. Bertha fell mysteriously ill, suffering the same symptoms as Laura. After consulting with a specially ordered priestly doctor, the General realised that Bertha was being visited by a vampire. He hid with a sword and waited until a large, black creature of undefined shape crawled onto his niece's bed and spread itself onto her throat. He leapt from his hiding place and attacked the creature, which had then taken the form of Millarca. She fled through the locked door, unharmed. Bertha died before the morning dawned. Upon arriving at Karnstein, the General asks a woodman where he can find the tomb of Mircalla Karnstein. The woodman says the tomb was relocated long ago by a Moravian nobleman who vanquished the vampires haunting the region. While the General and Laura are alone in the ruined chapel, Carmilla appears. The General and Carmilla both fly into a rage upon seeing each other, and the General attacks her with an axe. Carmilla disarms the General and disappears. The General explains that Carmilla is also Millarca, both anagrams for the original name of the vampire Mircalla, Countess Karnstein. The party is joined by Baron Vordenburg, the descendant of the hero who rid the area of vampires long ago. Vordenburg, an authority on vampires, has discovered that his ancestor was romantically involved with the Countess Karnstein before she died. Using his forefather's notes, he locates Mircalla's hidden tomb. An imperial commission exhumes the body of Mircalla. Immersed in blood, it seems to be breathing faintly, its heart beating, its eyes open. A stake is driven through its heart, and it gives a corresponding shriek; then, the head is struck off. The body and head are burned to ashes, which are thrown into a river. Afterwards, Laura's father takes his daughter on a year-long tour through Italy to regain her health and recover from the trauma, but she never fully does. Motifs. "Carmilla" exhibits the primary characteristics of Gothic fiction. It includes a supernatural figure, a dark setting of an old castle, a mysterious atmosphere, and ominous or superstitious elements. In the novella, Le Fanu abolishes the Victorian view of women as merely useful possessions of men, relying on them and needing their constant guardianship. The male characters of the story, such as Laura's father and General Spielsdorf, are exposed as being the opposite of the putative Victorian males – helpless and unproductive. The nameless father reaches an agreement with Carmilla's mother, whereas Spielsdorf cannot control the faith of his niece, Bertha. Both of these scenes portray women as equal, if not superior to men. This female empowerment is even more clear if we consider Carmilla's vampiric predecessors and their relationship with their prey. Carmilla is the opposite of those male vampires – she is actually involved with her victims both emotionally and (theoretically) sexually. Moreover, she is able to exceed even more limitations by dominating death. In the end, her immortality is suggested to be sustained by the river where her ashes had been scattered. Le Fanu also departs from the negative idea of female parasitism and lesbianism by depicting a mutual and irresistible connection between Carmilla and Laura. 
The latter, along with other female characters, becomes a symbol of all Victorian women – restrained and judged for their emotional reflexes. The ambiguity of Laura's speech and behaviour reveals her struggles with being fully expressive of her concerns and desires. Another important element of "Carmilla" is the concept of dualism presented through the juxtaposition of vampire and human, as well as lesbian and heterosexual. It is also vivid in Laura's irresolution, since she "feels both attraction and repulsion" towards Carmilla. The duality of Carmilla's character is suggested by her human attributes, the lack of predatory demeanour, and her shared experience with Laura. According to Gabriella Jönsson, Carmilla can be seen as a representation of the dark side of all mankind. Sources. As with "Dracula", critics have looked for the sources used in the writing of "Carmilla". One source used was from a dissertation on magic, vampires, and the apparitions of spirits written by Dom Augustin Calmet entitled "Traité sur les apparitions des esprits et sur les vampires ou les revenants de Hongrie, de Moravie, &c." (1751). This is evidenced by a report analysed by Calmet, from a priest who learned information of a town being tormented by a vampiric entity three years earlier. Having travelled to the town to investigate and collecting information of the various inhabitants there, the priest learned that a vampire had tormented many of the inhabitants at night by coming from the nearby cemetery and would haunt many of the residents on their beds. An unknown Hungarian traveller came to the town during this period and helped the town by setting a trap at the cemetery and decapitating the vampire that resided there, curing the town of their torment. This story was retold by Le Fanu and adapted into the thirteenth chapter of "Carmilla". According to Matthew Gibson, the Reverend Sabine Baring-Gould's "The Book of Were-wolves" (1863) and his account of Elizabeth Báthory, Coleridge's "Christabel" (Part 1, 1797 and Part 2, 1800), and Captain Basil Hall's "Schloss Hainfeld; or a Winter in Lower Styria" (London and Edinburgh, 1836) are other sources for Le Fanu's "Carmilla". Hall's account provides much of the Styrian background and, in particular, a model for both Carmilla and Laura in the figure of Jane Anne Cranstoun, Countess Purgstall. Influence. Carmilla, the title character, is the original prototype for a legion of female and lesbian vampires. Although Le Fanu portrays his vampire's sexuality with the circumspection that one would expect for his time, lesbian attraction evidently is the main dynamic between Carmilla and the narrator of the story: When compared to other literary vampires of the 19th century, Carmilla is a similar product of a culture with strict sexual mores and tangible religious fear. While Carmilla selected exclusively female victims, she only becomes emotionally involved with a few. Carmilla had nocturnal habits, but was not confined to the darkness. She had unearthly beauty, and was able to change her form and to pass through solid walls. Her animal alter ego was a monstrous black cat, not a large dog as in "Dracula". She did, however, sleep in a coffin. "Carmilla" works as a Gothic horror story because her victims are portrayed as succumbing to a perverse and unholy temptation that has severe metaphysical consequences for them. 
Some critics, among them William Veeder, suggest that "Carmilla", notably in its outlandish use of narrative frames, was an important influence on Henry James' "The Turn of the Screw" (1898). Bram Stoker's "Dracula". Le Fanu's work has been noted as an influence on Bram Stoker's masterwork of the genre, "Dracula". Censorship. In April 2025, the Lukashenko regime added the book to the List of printed publications containing information messages and materials, the distribution of which could harm the national interests of Belarus.
6922
42195518
https://en.wikipedia.org/wiki?curid=6922
Clitoridectomy
Clitoridectomy or clitorectomy is the surgical removal, reduction, or partial removal of the clitoris. It is rarely used as a therapeutic medical procedure, such as when cancer has developed in or spread to the clitoris. Commonly, non-medical removal of the clitoris is performed during female genital mutilation. Medical uses. Malignancies. A clitoridectomy is often done to remove malignancy or necrosis of the clitoris. This is sometimes done along with a radical complete vulvectomy. Surgery may also become necessary due to therapeutic radiation treatments to the pelvic area. Removal of the clitoris may be due to malignancy or trauma. Clitoromegaly and other conditions. Female infants born with a 46,XX genotype but have a clitoris size affected by congenital adrenal hyperplasia and are treated surgically with vaginoplasty that often reduces the size of the clitoris without its total removal. The atypical size of the clitoris is due to an endocrine imbalance in utero. Other reasons for the surgery include issues involving microphallism and those who have Müllerian agenesis. Treatments on children raise human rights concerns. Technique. Clitoridectomy surgical techniques are used to remove an invasive malignancy that extends to the clitoris. Standard surgical procedures are followed in these cases. This includes evaluation and biopsy. Other factors that will affect the technique selected are age, other existing medical conditions, and obesity. Other considerations are the probability of extended hospital care and the development of infection at the surgical site. The surgery proceeds with the use of general anesthesia, and prior to the vulvectomy/clitoridectomy an inguinal lymphadenectomy is first done. The extent of the surgical site extends beyond the boundaries of malignancy. Superficial lymph nodes may also need to be removed. If the malignancy is present in any muscles in the region, then the affected muscle tissue is also removed. In some cases, the surgeon is able to preserve the clitoris despite extensive malignancy. The cancerous tissue is removed and the incision is closed. Post-operative care may employ the use of suction drainage to allow the deeper tissues to heal toward the surface. Follow-up after surgery includes the stripping of the drainage device to prevent blockage. A typical hospital stay can last up to two weeks. The site of the surgery is left unbandaged to allow for frequent examination. Complications can include the development of lymphedema; not removing the saphenous vein during the surgery can help prevent this. In some instances, the buildup of fluid can be reduced through methods such as foot elevation, diuretic medication, and wearing compression stockings. In a clitoridectomy for infants with a clitoromegaly, the clitoris is often reduced instead of removed. The surgeon cuts the shaft of the elongated phallus and sews the glans and preserved nerves back onto the stump. In a less common surgery called clitoral recession, the surgeon hides the clitoral shaft under a fold of skin so only the glans remains visible. Society and culture. General. While much feminist scholarship has described clitoridectomy as a practice aimed at controlling women's sexuality, the historic emergence of the practice in ancient European and Middle Eastern cultures may also have derived from ideas about what a normal female genitalia should look like and the policing of boundaries between the sexes. 
In the seventeenth century, anatomists remained divided on whether a clitoris was a normal female organ, with some arguing that it was an abnormality in female development and, if large enough to be visible, it should always be removed at birth. In the 19th century, a clitoridectomy was thought by some to curb female masturbation; until the late 19th century, masturbation was thought by many to be unhealthy or immoral. Isaac Baker Brown (1812–1873), an English gynaecologist who was president of the Medical Society of London believed that the "unnatural irritation" of the clitoris caused epilepsy, hysteria, and mania, and he worked "to remove [it] whenever he had the opportunity of doing so", according to his obituary in the "Medical Times and Gazette". Peter Lewis Allen writes that Brown's views caused outrage, and he died penniless after being expelled from the Obstetrical Society. Occasionally, in American and English medicine of the nineteenth century, circumcision was done as a cure for insanity. Some believed that mental and emotional disorders were related to female reproductive organs and that removing the clitoris would cure the neurosis. This treatment was discontinued in 1867. Aesthetics may determine clitoral norms. A lack of ambiguity of the genitalia is seen as necessary in the assignment of a sex to infants and therefore whether a child's genitalia is normal, but what is considered ambiguous or normal can vary from person to person. Sexual behavior is another reason for clitoridectomies. Author Sarah Rodriguez stated that the history of medical textbooks has indirectly created accepted ideas about the female body. Medical and gynecological textbooks are also at fault in the way that the clitoris is described in comparison to a male's penis. The importance and originality of a female's clitoris is underscored because it is seen as "a less significant organ, since anatomy texts compared the penis and the clitoris in only one direction." Rodriguez said that a male's penis created the framework of the sexual organ. Not all historical examples of clitoral surgeries should be assumed to be clitoridectomy (removal of the clitoris). In the nineteen thirties, the French psychoanalyst Marie Bonaparte studied African clitoral surgical practices and showed that these often involved removal of the clitoral hood, not the clitoris. She also had a surgery done to her own clitoris by the Viennese surgeon Dr Halban, which entailed cutting the suspensory ligament of the clitoris to permit it to sit closer to her vaginal opening. These sorts of clitoral surgeries, contrary to reducing women's sexual pleasure, actually appear aimed at making coitus more pleasurable for women, though it is unclear if that is ever their actual outcome. Human rights concerns. Clitoridectomies are the most common form of female genital mutilation. The World Health Organization (WHO) estimates that clitoridectomies have been performed on 200 million girls and women that are currently alive. The regions that most clitoridectomies take place are Asia, the Middle East and west, north and east Africa. The practice also exists in migrants originating from these regions. Most of the surgeries are for cultural or religious reasons. Clitoridectomy of people with conditions such as congenital adrenal hyperplasia that cause a clitoromegaly is controversial when it takes place during childhood or under duress. 
Many women who were exposed to such treatment have reported loss of physical sensation in the affected area, and loss of autonomy. In recent years, multiple human rights institutions have criticized early surgical management of such characteristics. In 2013, it was disclosed in a medical journal that four unnamed elite female athletes from developing countries were subjected to gonadectomies and partial clitoridectomies after testosterone testing revealed that they had an intersex variation or disorder of sex development. In April 2016, the United Nations Special Rapporteur on health, Dainius Pūras, condemned this treatment as a form of genital mutilation "in the absence of symptoms or health issues warranting those procedures."
6924
40286053
https://en.wikipedia.org/wiki?curid=6924
Cabal
A cabal is a group of people who are united in some close design, usually to promote their private views or interests in an ideology, a state, or another community, often by intrigue and usually without the knowledge of those who are outside their group. The use of this term usually carries negative connotations of political purpose, conspiracy and secrecy. It can also refer to a secret plot or a clique, or it may be used as a verb (to form a cabal or secretly conspire). Etymology. The term "cabal" is derived from Kabbalah (a word that has numerous spelling variations), the Jewish mystical interpretation of the Hebrew scripture (קַבָּלָה). In Hebrew, it means "received doctrine" or "tradition", while in European culture (Christian Cabala, Hermetic Qabalah) it became associated with occult doctrine or a secret. It came into English via the French "cabale" from the medieval Latin "cabbala", and was known early in the 17th century through usages linked to Charles II and Oliver Cromwell. By the middle of the 17th century, it had developed further to mean some intrigue entered into by a small group and also referred to the group of people so involved, i.e. a semi-secret political clique. There is a theory that the term took on its present meaning from a group of ministers formed in 1668 – the "Cabal ministry" of King Charles II of England. Members included Sir Thomas Clifford, Lord Arlington, the Duke of Buckingham, Lord Ashley and Lord Lauderdale, whose initial letters coincidentally spelled CABAL, and who were the signatories of the public Treaty of Dover that allied England to France in a prospective war against the Netherlands, and served as a cover for the Secret Treaty of Dover. The theory that the word originated as an acronym from the names of the group of ministers is a folk etymology, although the coincidence was noted at the time and could possibly have popularized its use. Usage in the Netherlands. In Dutch, the word "kabaal", also "kabale" or "cabale," was used during the 18th century in the same way. The "Friesche Kabaal" (Frisian Cabal) denoted the Frisian pro-Orange nobility which supported the "stadholderate", but also had great influence on "stadtholders" Willem IV and Willem V and their regents, and therefore on the matters of state in the Dutch Republic. This influence came to an end when the major Frisian nobles at the court fell out of favor. The word nowadays has the meaning of noise, uproar, racket. It was derived as such from French and mentioned for the first time in 1845. Conspiratorial discourse. Followers of the QAnon conspiracy theory use "The Cabal" to refer to what is perceived as a secret worldwide elite organization who, according to proponents, wish to undermine democracy and freedom, and implement their own globalist agendas. Some anti-government movements in Australia, particularly those that emerged during Canberra's response to the pandemic, claimed that Scott Morrison's secret ministerial appointments were evidence of what they said was happening all along – a "secret cabal". The term is sometimes employed as an antisemitic dog whistle due to its evocation of centuries-old antisemitic tropes.
6925
194203
https://en.wikipedia.org/wiki?curid=6925
Cytochrome
Cytochromes are redox-active proteins containing a heme, with a central iron (Fe) atom at its core, as a cofactor. They are involved in the electron transport chain and redox catalysis. They are classified according to the type of heme and its mode of binding. Four varieties are recognized by the International Union of Biochemistry and Molecular Biology (IUBMB), cytochromes a, cytochromes b, cytochromes c and cytochrome d. Cytochrome function is linked to the reversible redox change from ferrous (Fe(II)) to the ferric (Fe(III)) oxidation state of the iron found in the heme core. In addition to the classification by the IUBMB into four cytochrome classes, several additional classifications such as cytochrome o and cytochrome P450 can be found in biochemical literature. History. Cytochromes were initially described in 1884 by Charles Alexander MacMunn as respiratory pigments (myohematin or histohematin). In the 1920s, Keilin rediscovered these respiratory pigments and named them the cytochromes, or “cellular pigments”. He classified these heme proteins on the basis of the position of their lowest energy absorption band in their reduced state, as cytochromes "a" (605 nm), "b" (≈565 nm), and "c" (550 nm). The ultra-violet (UV) to visible spectroscopic signatures of hemes are still used to identify heme type from the reduced bis-pyridine-ligated state, i.e., the pyridine hemochrome method. Within each class, cytochrome "a", "b", or "c", early cytochromes are numbered consecutively, e.g. cyt "c", cyt "c1", and cyt "c2", with more recent examples designated by their reduced state R-band maximum, e.g. cyt "c559". Structure and function. The heme group is a highly conjugated ring system (which allows its electrons to be very mobile) surrounding an iron ion. The iron in cytochromes usually exists in a ferrous (Fe2+) and a ferric (Fe3+) state with a ferroxo (Fe4+) state found in catalytic intermediates. Cytochromes are, thus, capable of performing electron transfer reactions and catalysis by reduction or oxidation of their heme iron. The cellular location of cytochromes depends on their function. They can be found as globular proteins and membrane proteins. In the process of oxidative phosphorylation, a globular cytochrome cc protein is involved in the electron transfer from the membrane-bound complex III to complex IV. Complex III itself is composed of several subunits, one of which is a b-type cytochrome while another one is a c-type cytochrome. Both domains are involved in electron transfer within the complex. Complex IV contains a cytochrome a/a3-domain that transfers electrons and catalyzes the reaction of oxygen to water. Photosystem II, the first protein complex in the light-dependent reactions of oxygenic photosynthesis, contains a cytochrome b subunit. Cyclooxygenase 2, an enzyme involved in inflammation, is a cytochrome b protein. In the early 1960s, a linear evolution of cytochromes was suggested by Emanuel Margoliash that led to the molecular clock hypothesis. The apparently constant evolution rate of cytochromes can be a helpful tool in trying to determine when various organisms may have diverged from a common ancestor. Types. Several kinds of cytochrome exist and can be distinguished by spectroscopy, exact structure of the heme group, inhibitor sensitivity, and reduction potential. Four types of cytochromes are distinguished by their prosthetic groups: There is no "cytochrome e," but cytochrome f, found in the cytochrome b6f complex of plants is a c-type cytochrome. 
In mitochondria and chloroplasts, these cytochromes are often combined in electron transport and related metabolic pathways: A distinct family of cytochromes is the cytochrome P450 family, so named for the characteristic Soret peak formed by absorbance of light at wavelengths near 450 nm when the heme iron is reduced (with sodium dithionite) and complexed to carbon monoxide. These enzymes are primarily involved in steroidogenesis and detoxification.
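As a toy illustration of the wavelength-based classification described above, the sketch below assigns a measured reduced-state absorption maximum to the nearest of Keilin's three classical classes. The 10 nm tolerance is an arbitrary assumption, and real assignment also depends on the heme type, its mode of binding and the pyridine hemochrome method mentioned in the text.

    # Reduced-state absorption maxima (nm) from Keilin's classification, as given above.
    REDUCED_BAND_MAXIMA = {"a": 605.0, "b": 565.0, "c": 550.0}

    def classify_cytochrome(observed_max_nm, tolerance_nm=10.0):
        """Return the class ('a', 'b' or 'c') whose band maximum lies closest to the
        observed reduced-state absorption maximum, or None if none is within the
        tolerance. A toy heuristic only: real assignment also uses the heme type,
        its mode of binding and the pyridine hemochrome method."""
        best = min(REDUCED_BAND_MAXIMA, key=lambda c: abs(REDUCED_BAND_MAXIMA[c] - observed_max_nm))
        if abs(REDUCED_BAND_MAXIMA[best] - observed_max_nm) <= tolerance_nm:
            return best
        return None

    print(classify_cytochrome(551))   # -> 'c'
    print(classify_cytochrome(604))   # -> 'a'
    print(classify_cytochrome(450))   # -> None (a P450 Soret peak is not one of these bands)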
6927
664832
https://en.wikipedia.org/wiki?curid=6927
Crowded House
Crowded House are an Australian-New Zealand rock band, formed in Melbourne, Victoria, Australia, in 1985. Its founding members were Neil Finn (vocalist, guitarist, primary songwriter) and Paul Hester (drums), who were both former members of Split Enz, and Nick Seymour (bass). Later band members included Finn's brother Tim, who was in their former band Split Enz; sons Liam and Elroy; as well as Americans Mark Hart and Matt Sherrod. Neil Finn and Seymour are the sole constant members. Originally active from 1985 to 1996, Crowded House had consistent commercial and critical success in Australia and New Zealand. They achieved success in the United States with their self-titled debut album, which provided the Top Ten hits "Don't Dream It's Over" and "Something So Strong". Further international success came in the UK, Europe, and South Africa in the early 1990s with their third and fourth albums ("Woodface" and "Together Alone") and the compilation album "Recurring Dream", which included the hits "Fall at Your Feet", "Weather with You", "Distant Sun", "Locked Out", "Instinct", and "Not the Girl You Think You Are". Neil and Tim Finn were each awarded an OBE in June 1993 for their contributions to the music of New Zealand. Crowded House disbanded in 1996 following several farewell concerts that year, including the "Farewell to the World" concerts in Melbourne and Sydney. Hester died by suicide in 2005. A year later, the group re-formed with drummer Matt Sherrod and released two further albums ("Time on Earth" and "Intriguer"), each of which reached number one in Australia. The band went on another hiatus, and reunited in 2020 with a new line-up featuring Neil Finn, Nick Seymour, Mitchell Froom, and Finn's sons Liam and Elroy. Their most recent album, "Gravity Stairs", was released in 2024. As of 2021, Crowded House have sold over 15 million albums worldwide. In November 2016, the band was inducted into the ARIA Hall of Fame. History. Neil Finn (vocals, guitar, piano) and drummer Paul Hester (the Cheks, Deckchairs Overboard) were former members of New Zealand band Split Enz, which spent part of 1975–6 in Australia and several years in England. Neil Finn is the younger brother of Split Enz founding member Tim Finn, who joined Crowded House in 1990 on vocals, guitars, and keyboards for the album "Woodface". Bassist Nick Seymour (Plays with Marionettes, Bang, The Horla) is the younger brother of singer-songwriter and guitarist Mark Seymour of Australian rock group Hunters & Collectors. Formation and name change (1984–1986). Finn and Hester decided to form a new band during the first Split Enz farewell tour, "Enz with a Bang", in late 1984. Seymour approached Finn during the after party for the Melbourne show and asked if he could audition for the new band. The Mullanes formed in Melbourne in early 1985 with Finn, Hester, Seymour, and guitarist Craig Hooper (the Reels) and first performed on 11 June. They secured a record contract with Capitol Records, but Hooper left the band before the remaining trio moved to Los Angeles to record their debut album. At Capitol's behest, the band's name was changed to Crowded House, which alluded to the lack of space at the small Hollywood Hills house they shared during the recording of the album "Crowded House". Former Split Enz keyboardist Eddie Rayner produced the track "Can't Carry On" and was asked to join the band. He toured with them in 1988, but was unable to become a full member due to family commitments. Early albums (1986–1990). 
Thanks to their Split Enz connection, the newly formed Crowded House had an established Australasian fanbase. They began by playing at festivals in Australia and New Zealand and released their debut album, "Crowded House", in August 1986. Capitol Records initially failed to see the band's potential and gave them only low-key promotion, forcing the band to play at small venues to try to gain attention. The album's first single, "Mean to Me", reached the Australian Kent Music Report Singles Chart top 30 in June. It failed to chart in the US, but moderate American airplay introduced US listeners to the group. The next single, "Don't Dream It's Over", was released in October 1986 and proved an international hit, reaching number two on the US "Billboard" Hot 100 and number one in Canada. New Zealand radio stations initially gave the song little support until months later when it became internationally successful. Ultimately, the song reached number one on the New Zealand singles chart and number eight in Australia. It remains the group's most commercially successful song. In March 1987 the group were awarded "Best New Talent", along with "Song of the Year" and "Best Video" awards for "Don't Dream It's Over", at the inaugural ARIA Music Awards. The video also earned the group the MTV Video Music Award for Best New Artist that year. The song has often been covered by other artists and gave Paul Young a hit single in 1991. It was also used for a New Zealand Tourism Board advertisement in its "100% Pure New Zealand" worldwide promotion from October 2005. In May 2001, "Don't Dream it's Over" was voted seventh in a poll of the best Australian songs of all time by the Australasian Performing Right Association. In June 1987, nearly a year after its release, "Crowded House" finally reached number one on the Kent Music Report Album Charts. It also reached number three in New Zealand and number twelve in the US. The follow-up to "Don't Dream it's Over", "Something So Strong", was another global smash, reaching the Top 10 in New Zealand America, and Canada. "World Where You Live" and "Now We're Getting Somewhere" were also released as singles with chart success. As the band's primary songwriter, Neil Finn was under pressure to create a second album to match their debut and the band joked that one potential title for the new release was "Mediocre Follow-Up". Eventually titled "Temple of Low Men", their second album was released in July 1988 with strong promotion by Capitol Records. The album did not fare as well as their debut in the US, only reaching number 40 and selling around 200,000 copies, but it achieved Australasian success, reaching number one in Australia and number two in New Zealand. The first single "Better Be Home Soon" peaked at number two on both Australian and New Zealand singles charts and reached top 50 in the US, though the following four singles were less successful. Crowded House undertook a short tour of Australia and Canada to promote the album, with Eddie Rayner on keyboards. Multi-instrumentalist Mark Hart, who would eventually become a full band member, replaced Rayner in January 1989. After the tour, Finn fired Seymour from the band. Music journalist Ed Nimmervoll claimed that Seymour's temporary departure was because Finn blamed him for causing his writer's block; however, Finn cited "artistic differences" as the reason. Seymour said that after a month he contacted Finn and they agreed that he would return to the band. Early 1990s (1991–1994). 
Crowded House took a break after the Canadian leg of the "Temple of Low Men" tour. Neil Finn and his brother Tim recorded songs they had co-written for their own album, "Finn". Following the recording sessions with Tim, Neil began writing and recording a third Crowded House album with Hester and Seymour, but these tracks were rejected by the record company, so Neil asked Tim if Crowded House could use the "Finn" songs. Tim jokingly agreed on the proviso that he become a member, which Neil apparently took literally. With Tim as an official member, the band returned to the studio. The new tracks, as well as some from the previously rejected recordings were combined to make "Woodface", which was released in July 1991. The album features eight tracks co-written by Neil and Tim, which feature the brothers harmonising on lead vocals, except on the sombre "All I Ask" on which Tim sang lead. The track was later used on AIDS awareness commercials in Australia. Five of the album's tracks were Neil's solo compositions and two were by Hester, the exuberant "Italian Plastic", which became a crowd favourite at concerts and the hidden track "I'm Still Here". "Chocolate Cake", a humorous comment on American excesses that was not taken well by some US critics and sections of the American public was released in June 1991 as the first single. It failed to chart in the US; however, it reached number two on Billboard's Modern Rock Tracks chart. The song peaked at number seven in New Zealand and reached the top 20 in Australia. The second single, "Fall at Your Feet", was less successful in Australia and New Zealand but did at least reach the US Hot 100. The album reached number one in New Zealand, number two in Australia, number six in the UK and made the top 20 in several European countries. The third single from "Woodface", "Weather With You", peaked at No. 7 in early 1992 giving the band their highest UK chart placement. By contrast, the album had limited success in the US, only reaching number 83 on the Billboard 200 Album Chart and selling 225,000 copies. Despite the success of the album, Tim Finn left Crowded House suddenly part-way through the UK leg of the "Woodface" tour, a few hours before the band were due to play at King Tut's Club in Glasgow on 1 November 1991. Neil Finn noted that "on stage, it just didn't feel right for us or him. We're very off-the-cuff and conversational, whereas Tim is into creating a spectacle and the two approaches don't gel all that well... We'd all open our mouths at the same time and then stop and go, Oh, after you. From Tim's point of view it was quite a relief to put it all on the table, I think. For half the set he was standing there with his acoustic guitar, not really feeling part of it." Paul Hester commented that "both sides felt good about parting before it could get ugly." Performances on the UK tour, at the Town and Country Club in London, were recorded live and given a limited release in Australia, while individual songs from those shows were released as B-sides of singles in some countries. In June 1993 the New Zealand Government recommended that the Queen award an OBE to Neil and Tim Finn for their contribution to the music of New Zealand. For their fourth album, "Together Alone", Crowded House used producer Martin Glover (aka "Youth") and invited touring musician Mark Hart (guitar and keyboards) to become a permanent band member. The album was recorded at Karekare Beach, New Zealand, which gave its name to the opening track, "Kare Kare". 
The album was released in October 1993 and sold well internationally on the strength of lead single "Distant Sun" and followup "Private Universe". It topped the New Zealand Album Chart, reached number 2 in Australia and number 4 in the UK. "Locked Out" was the album's first US single and received airplay on MTV and VH1. This track and "My Sharona" by the Knack, which were both included on the soundtrack of the film "Reality Bites", were bundled together on a jukebox single to promote the film soundtrack. Saying farewell (1994–1996). Crowded House were midway through a US tour when Paul Hester quit the band on 15 April 1994. He flew home to Melbourne to await the birth of his first child and indicated that he required more time with his family. Wally Ingram, drummer for support act Sheryl Crow, temporarily filled in until a replacement, Peter Jones (ex-Harem Scarem, Vince Jones, Kate Ceberano's Septet) was found. After the tour, the Finn Brothers released their album "Finn" in November 1995. In June 1996, at a press conference to announce the release of their greatest hits album "Recurring Dream", Neil revealed that Crowded House were to disband. The June 1996 concerts in Europe and Canada were to be their final performances. "Recurring Dream" contained four songs from each of the band's studio albums, along with three new songs. The album debuted at number one in Australia, New Zealand and the UK in July 1996. Early copies included a bonus CD of live material. The album's three new songs, which were released as singles, were "Instinct", "Not the Girl You Think You Are" and "Everything Is Good for You", which featured backing vocals from Pearl Jam's Eddie Vedder. Paul Hester returned to the band to play drums on the three new tracks. Worried that their goodbye had been too low-key and had disregarded their home fans, the band performed the "Farewell to the World" concert on the steps of the Sydney Opera House on 24 November 1996, which raised funds for the Sydney Children's Hospital. The concert featured the line-up of Neil Finn, Nick Seymour, Mark Hart and Paul Hester. Tim Finn and Peter Jones both made guest appearances. Support bands on the day were Custard, Powderfinger and You Am I. The concert had one of the highest live audiences in Australian history with the crowd being estimated at between 120,000 and 250,000 people. "Farewell to the World" was released on VHS in December 1996. In 2007, a double CD and a DVD were issued to commemorate the concert's tenth anniversary. The DVD featured newly recorded audio commentary by Finn, Hart and Seymour and other new bonus material. Between farewell and reunion (1996–2006). Following the 1996 break-up of Crowded House, the members embarked upon a variety of projects. Neil Finn released two solo studio albums, "Try Whistling This" (1998) and "One Nil" (2001), as well as two live albums, "Sessions at West 54th" (2000) and "7 Worlds Collide" (2001). "7 Worlds Collide" saw him performing with guest musicians including Eddie Vedder, Johnny Marr, Ed O'Brien and Phil Selway of Radiohead, Tim Finn, Sebastian Steinberg, Lisa Germano and Betchadupa (featuring his son Liam Finn). A double CD and DVD of the shows were released in November 2001. Tim Finn had resumed his solo career after leaving the group in 1992 and he also worked with Neil on a second Finn Brothers album, "Everyone Is Here", which was released in 2004. Paul Hester joined The Finn Brothers on stage for three songs at their Palais Theatre show in Melbourne at the end of 2004. 
Nick Seymour also joined them on stage in Dublin, where he was living, in 2004. Peter Jones and Nick Seymour joined Australian group Deadstar for their second album, "Milk", in 1997. Seymour later worked as a record producer in Dublin, producing Irish group Bell X1's debut album, "Neither Am I" in 2000. Mark Hart rejoined Supertramp in the late 1990s and later toured with Ringo Starr & His All-Starr Band. In 2001 he released a solo album, "Nada Sonata". Paul Hester worked with children's entertainers the Wiggles, playing "Paul the Cook". He also had his own ABC show "Hessie's Shed" in Australia from late 1997. He formed the band Largest Living Things, which was the name rejected by Capitol Records in favour of Crowded House. It was on "Hessie's Shed" that Finn, Hester and Seymour last shared a stage, on an episode filmed as part of Finn's promotion for his solo album "Try Whistling This" in 1998. Finn and Hester performed "Not the Girl You Think You Are" with Largest Living Things, before being joined by Seymour for "Sister Madly" and a version of Paul Kelly's "Leaps and Bounds", which also featured Kelly on vocals. In late 2003, Hester hosted the series "Music Max's Sessions". Hester and Seymour were reunited when they both joined singer-songwriter Matt O'Donnell's Melbourne-based group Tarmac Adam. The band released one album, 2003's "Handheld Torch", which was produced by Seymour. In May 1999 Crowded House issued a compilation of unreleased songs, "Afterglow", which included the track "Recurring Dream", recorded when the group were still called The Mullanes and included Craig Hooper on guitar. The album's liner notes included information about the songs, written by music journalist David Hepworth. Some limited-release versions included a second CD with songwriting commentary by Finn. The liner notes confirmed that Crowded House had no plans to reunite at that time. A 2003 compilation album, "Classic Masters", was released only in the US, while 2005 saw the release of the album "She Will Have Her Way", a collection of cover versions of Crowded House, Split Enz, Tim Finn and Finn Brothers songs by Australasian female artists. The album reached the top 5 in Australia and New Zealand. On 26 March 2005 Paul Hester died by suicide in a park near his home in Melbourne. He was 46 years old. His obituary in "The Sydney Morning Herald" stated that he had fought "a long battle with depression." Following the news of Hester's death, Nick Seymour joined The Finn Brothers on stage at the Royal Albert Hall in London, where the three played in memory of Hester. A snare drum with a top hat on it stood at the front of the stage as a tribute. Writing in 2010 Neil Finn said, "When we lost Paul it was like someone pulled the rug out from underneath everything, a terrible jolt out of the dark blue. He was the best drummer I had ever played with and for many years, my closest friend." Reunion and "Time on Earth" (2006–2009). In 2006 Neil Finn asked Nick Seymour to play bass on his third solo album. Seymour agreed and the two joined up with producer and multi-instrumentalist Ethan Johns to begin recording. As the recording sessions progressed it was decided that the album would be issued under the Crowded House band name, rather than as a Neil Finn solo album. In January 2007, the group publicly announced their reformation and on 23 February, after 20 days of auditions, former Beck drummer Matt Sherrod joined Finn, Seymour and Mark Hart to complete the new line up. 
As Sherrod and Hart had not participated in the initial sessions, four new tracks were recorded with producer Steve Lillywhite, including the album's first single, "Don't Stop Now". On 17 March 2007 the band played a live show at their rehearsal studio in front of around fifty fans, friends and family; the performance was streamed live as a webcast. The two-and-a-half-hour set featured several new tracks, including "Silent House", co-written by Finn with the Dixie Chicks. A concert on board the "Thekla", moored in Bristol, followed on 19 March. Crowded House played at the Marquee Theatre in Tempe, Arizona, on 26 April as a warm-up for their appearance at the Coachella Festival in Indio, California, on 29 April. They played at the Australian Live Earth concert in Sydney on 7 July. The next day, Finn and Seymour were interviewed on "Rove Live" and the band, with Hart and Sherrod, performed "Don't Stop Now" to promote the new album, which was titled "Time on Earth". The single was a minor hit in Australia and the UK. The album was released worldwide in June and July; it topped the album chart in New Zealand and made number 2 in Australia and number 3 in the UK. On 6 December 2008 Crowded House played the Homebake festival in Sydney, with warm-up gigs at small venues in Hobart, Melbourne and Sydney. For these shows the band were augmented by multi-instrumentalist Don McGlashan and Neil's younger son, Elroy Finn, on guitar. On 14 March 2009 the band joined Neil's older son, Liam Finn, on stage for three songs at the Sound Relief concert in Melbourne.
"Intriguer", second split and Sydney Opera House shows (2009–2018). Crowded House began recording their follow-up to "Time on Earth" in April 2009 at Finn's own Roundhead Studios. The album, "Intriguer", was produced by Jim Scott, who had worked on "The Sun Came Out" by Neil's 7 Worlds Collide project. In August 2009, Finn travelled to Los Angeles to record some overdubs at Scott's studio before they began mixing tracks. The album was released in June 2010, in time for the band's appearance at the West Coast Blues & Roots Festival near Perth. Finn stated that the album contains some "unexpected twists and turns" and some songs that "sound like nothing we've done before". "Intriguer" topped the Australian album chart, reached number 3 in New Zealand and number 12 in the UK. Crowded House undertook an extensive world tour in 2010 in support of "Intriguer". This was the first album for which the band regularly interacted with fans via the internet on their own relaunched website. The band sold recordings of the shows on the "Intriguer" tour on USB flash drives and made individual live tracks available for free download. A new compilation album, "The Very Very Best of Crowded House", was released in October 2010 to celebrate the band's 25th anniversary. It includes 19 of the band's greatest hits and is also available in a box set with a 25-track DVD of their music videos. A deluxe digital version, available for download only, has 32 tracks, including a rare 1987 live recording of the band's version of the Hunters & Collectors song "Throw Your Arms Around Me". No mention of the album was made on the band's official website or Twitter page, suggesting that the band were not involved in its release. Following the success of the album "She Will Have Her Way" in 2005, a second album of cover versions of Finn Brothers songs (including Crowded House songs) was released on 12 November 2010.
Entitled "He Will Have His Way", all tracks are performed by Australasian male artists. In November 2011 an Australian tour featured artists involved with the "She Will Have Her Way" and "He Will Have His Way" projects, including Paul Dempsey, Clare Bowditch, Seeker Lover Keeper (Sarah Blasko, Sally Seltmann and Holly Throsby), Alexander Gow (Oh Mercy) and Lior. The band played what would be their last concert for over five years at the A Day on the Green festival in Auckland on 27 February 2011. Former Crowded House drummer Peter Jones died from brain cancer on 18 May 2012, aged 49. A statement issued by the band described him as, "A warm-hearted, funny and talented man, who was a valuable member of Crowded House." In September 2015, the song "Help is Coming" from the "Afterglow" album, was released as a download and limited edition 7" single to raise money for the charity Save the Children. The B-side, "Anthem", was a previously unreleased track, recorded at the same demo session as "Help is Coming" in 1995, with vocals added in 2015. Peter Jones plays drums on both songs. The money will be used to provide shelter, water, sanitation and hygiene for refugees in Syria, Lebanon and Iraq. Neil Finn said of "Help Is Coming"..."It was always a song about refugees, even if at the time I was thinking about the immigrants setting off on ships from Europe to America, looking for a better life for their families. There is such a huge scale and urgency to the current refugee crises that barely a day goes by without some crushing image or news account to confront us. We can't be silent any more." Neil Finn confirmed in a 2016 interview with the Dutch newspaper "Volkskrant" that Crowded House had been on indefinite hiatus since the end of the "Intriguer" tour. Later that year, however, he and Seymour announced a series of concerts at the Sydney Opera House to mark the 20th anniversary of the "Farewell to the World" show (24 November 1996). The band, with the same lineup as its initial reunion and Tim Finn as guest, performed four shows between 24 and 27 November 2016. Around the same time, each of the band's 7 studio albums (including the rarities collection "Afterglow") was reissued in deluxe 2-CD format with bonus tracks including demos, live recordings, alternate mixes, b-sides and outtakes. In April 2018, Neil Finn joined Fleetwood Mac, along with Mike Campbell of Tom Petty and the Heartbreakers, as a full-time member in the wake of Lindsey Buckingham's departure from the band. Reformation, new line-up and "Dreamers Are Waiting" (2019–2023). In August 2019, Crowded House announced a reunion show at the 2020 Byron Bay Bluesfest. Shortly afterwards, Mark Hart announced that he would not be involved in the group's reunion. Finn confirmed Hart's departure on his podcast Fangradio, noting that he "love[s] Hart dearly as a friend, as a contributor and a collaborator" and that "all will be revealed... trust that good thought and good heart gets put into all of these decisions." In December 2019, Neil Finn announced that the new Crowded House line-up would consist of himself, Seymour, the band's original producer Mitchell Froom and his sons Liam and Elroy. He added that they were making a new studio album, the first since 2010's "Intriguer". Due to the COVID-19 pandemic, the band's planned 2020 concerts have had to be rescheduled to 2021, and later again to 2022. On 15 October 2020, the band released "Whatever You Want", the first single from the band in over a decade. 
The band also shared an accompanying music video, starring Mac DeMarco. On 17 February 2021, the band shared another single, "To the Island", the second single from the band's seventh studio album, "Dreamers Are Waiting", which was announced on the same day for release on 4 June 2021. The band supported the single with a national tour of New Zealand in March 2021. On 19 August 2021, the band performed "To the Island" on CBS's "The Late Show with Stephen Colbert". On 2 December 2021, the band announced that it would tour Australia in 2022, playing six shows around the country, including an appearance at the 2022 Bluesfest. On 24 June 2022, the band played at Glastonbury Festival. In May 2023, Crowded House toured North America in support of "Dreamers Are Waiting".
"Gravity Stairs" (2024–present). In February 2024, Crowded House released "Oh Hi", the first single from their eighth album, "Gravity Stairs", which followed on 31 May 2024. The band later announced a North American tour for the album and released "Teenage Summer" as the second single, before adding a European tour in October and a New Zealand and Australian tour beginning in November.
Style. Songwriting and musical influences. As the band's primary songwriter, Neil Finn has always set the tone for its sound. AllMusic said that Finn "has consistently proven his knack for crafting high-quality songs that combine irresistible melodies with meticulous lyrical detail." Neil's brother Tim was an early and important musical influence. Neil first saw Tim play with Split Enz in 1972, and said "that performance and those first songs made a lasting impression on me." His mother was another significant musical influence, encouraging him to listen to a variety of genres, including Irish folk music and Māori music. She would play piano at family parties and encourage Neil and Tim to accompany her.
Album covers, costumes and set design. Bassist Nick Seymour, who is also an artist, designed or co-designed all of the band's album covers and interior artwork. He also designed some of the costumes worn by the group, notably those on the cover of the group's debut album, "Crowded House". Seymour collaborated with Finn and Hester on the set design of some of their early music videos, including "Don't Dream It's Over" and "Better Be Home Soon". Since the band reunited, Seymour has again designed their album covers. The majority of the covers for the band's singles were not designed by Seymour; the artwork for "Pineapple Head", for example, was created by Reg Mombassa of Mental As Anything. For the first four albums, Mombassa and Noel Crombie, who had been the main designer of Split Enz's artwork, assisted Seymour in creating sets and costumes. For the "Farewell to the World" concerts, Crombie designed the set, while Mombassa and Seymour designed promotional materials and artwork.
Discography. Studio albums
Awards. Crowded House has won several national and international awards. In Australia, the group has won 13 ARIA Awards from 36 nominations, including the inaugural Best New Talent award in 1987. The majority of their wins were for their first two albums, "Crowded House" and "Temple of Low Men". They won eight APRA Awards from eleven nominations and were nominated for the New Zealand Silver Scroll for "Don't Stop Now" in 2007. "Don't Dream It's Over" was named the seventh best Australian song of all time by APRA in 2001.
In 1987, Crowded House won the American MTV Video Music Award for "Best New Artist" for their song "Don't Dream It's Over", which was also nominated for three other awards. In 1994, the group was named "International Group of the Year" at the BRIT Awards. In 2009, "Don't Dream It's Over" was ranked number fifty on the Triple J "Hottest 100 of All Time", voted by the Australian public. In November 2016, Crowded House was inducted into the ARIA Hall of Fame, 30 years after their formation.