id | revid | url | title | text
---|---|---|---|---
6250 | 18130370 | https://en.wikipedia.org/wiki?curid=6250 | Colorado Springs, Colorado |
Colorado Springs is the most populous city in El Paso County, Colorado, United States, and its county seat. The city had a population of 478,961 at the 2020 census, a 15.02% increase since 2010. Colorado Springs is the second-most populous city and most extensive city in the state of Colorado, and the 40th-most-populous city in the United States. It is the principal city of the Colorado Springs metropolitan area, which had 755,105 residents in 2020, and the second-most prominent city of the Front Range Urban Corridor. It is located in east-central Colorado on Fountain Creek, south of Denver.
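The 15.02% figure can be checked directly from the two census counts; a minimal sketch in Python, using the published 2010 count of 416,427 (a figure not stated in the text above):

```python
# Percent change between the 2010 and 2020 census counts for Colorado Springs.
# The 2010 figure (416,427) is the published 2010 census count.
pop_2010 = 416_427
pop_2020 = 478_961

pct_increase = (pop_2020 - pop_2010) / pop_2010 * 100
print(f"{pct_increase:.2f}% increase since 2010")  # -> 15.02% increase since 2010
```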
At 6,035 feet, the city stands over one mile above sea level. It is near the base of Pikes Peak, which rises 14,115 feet above sea level on the eastern edge of the Southern Rocky Mountains. The city is the largest city north of Mexico above 6,000 feet in elevation.
History.
The Ute, Arapaho, and Cheyenne peoples were the first recorded as inhabiting the area that would become Colorado Springs. Part of the territory included in the United States' 1803 Louisiana Purchase, the current city area was designated part of the 1854 Kansas Territory. In 1859, after the first local settlement was established, it became part of the Jefferson Territory on October 24 and of El Paso County on November 28. Colorado City, at the Front Range confluence of Fountain and Camp creeks, was "formally organized on August 13, 1859" during the Pikes Peak Gold Rush. It served as the capital of the Colorado Territory from November 5, 1861, until August 14, 1862, when the capital was moved to Golden, before it was finally moved to Denver in 1867. So many immigrants from England had settled in Colorado Springs by the early 1870s that the city was locally referred to as "Little London". In 1871 the Colorado Springs Company laid out the towns of La Font (later called Manitou Springs) and Fountain Colony, upstream and downstream respectively of Colorado City. Within a year, Fountain Colony was renamed Colorado Springs and officially incorporated. The El Paso County seat shifted from Colorado City to the Town of Colorado Springs in 1873. On December 1, 1880, Colorado Springs expanded northward with two annexations.
The second period of annexations was during 1889–90, and included Seavey's Addition, West Colorado Springs, East End, and another North End addition. In 1891 the Broadmoor Land Company built the Broadmoor suburb, which included the Broadmoor Casino, and by December 12, 1895, the city had "four Mining Exchanges and 275 mining brokers." By 1898, the city was divided into quadrants by the north–south Cascade Avenue and the east–west Washington/Pikes Peak avenues.
From 1899 to 1901 the Tesla Experimental Station operated on Knob Hill, and aircraft flights to the Broadmoor's neighboring fields began in 1919. Alexander Airport north of the city opened in 1925, and in 1927 the original Colorado Springs Municipal Airport land was purchased east of the city.
The city's military presence dates to World War II, beginning with Camp Carson (now the 135,000-acre Fort Carson base), established in 1941. During the war, the United States Army Air Forces leased land adjacent to the municipal airfield, naming it Peterson Field in December 1942.
In November 1950, Ent Air Force Base was selected as the Cold War headquarters for Air Defense Command (ADC). The former WWII Army Air Base, Peterson Field, which had been inactivated at the end of the war, was re-opened in 1951 as a U.S. Air Force base. North American Aerospace Defense Command (NORAD) was established as a hardened command and control center within the Cheyenne Mountain Complex during the Cold War.
Between 1965 and 1968, the University of Colorado Colorado Springs, Pikes Peak State College and Colorado Technical University were established in or near the city. In 1977 most of the former Ent AFB became a US Olympic training center. The Libertarian Party was founded within the city in the 1970s.
On October 1, 1981, the Broadmoor Addition, Cheyenne Canon, Ivywild, Skyway, and Stratton Meadows were annexed after the Colorado Supreme Court "overturned a district court decision that voided the annexation". Further annexations expanding the city include the Nielson Addition and Vineyard Commerce Park Annexation in September 2008.
On June 23, 2012, the Waldo Canyon fire began northwest of the city. The fire destroyed 347 homes in the city and killed two people; in total, more than 32,000 residents had to be evacuated. It was the most destructive fire in state history at the time, surpassed by the Black Forest Fire the following year.
Geography.
The city lies in a semi-arid steppe region, with the Southern Rocky Mountains to the west, the Palmer Divide to the north, high plains farther east, and high desert lands to the south beyond Fountain, approaching Pueblo. Colorado Springs is about 70 miles, or at best one hour and five minutes, south of Denver by car using I-25.
Colorado Springs has the greatest total area of any municipality in Colorado. At the 2020 United States census, the city had a total area of including of water.
Climate.
Colorado Springs has a cooler, dry-winter, monsoon-influenced continental climate (Köppen "Dwa"/"Cwa"); its location just east of the Rocky Mountains affords it the rapid warming influence of chinook winds during winter but also subjects it to drastic day-to-day variability in weather conditions. The city has abundant sunshine year-round, averaging 243 sunny days per year, and receives approximately of annual precipitation. Due to unusually low precipitation for several years after flooding in 1999, Colorado Springs enacted lawn water restrictions in 2002. These were lifted in 2005 but permanently reinstated in December 2019.
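The "Dwa"/"Cwa" labels encode specific Köppen tests: a dry winter ("w"), a hot summer ("a"), and a cold-month threshold separating the C and D groups. A simplified sketch of that decision rule, assuming the common 0 °C threshold and northern-hemisphere seasons (the function and its name are illustrative, not a standard library API):

```python
def koppen_dwa_or_cwa(monthly_temp_c, monthly_precip_mm):
    """Distinguish Koeppen Dwa from Cwa for a northern-hemisphere station,
    given 12 monthly mean temperatures (deg C) and precipitation totals (mm).
    Simplified sketch: uses the 0 deg C cold-month threshold (some schemes
    use -3 deg C) and checks only the 'w' and 'a' letters; a full classifier
    would also rule out arid (B) climates first."""
    coldest = min(monthly_temp_c)
    warmest = max(monthly_temp_c)
    # Winter = Oct-Mar, summer = Apr-Sep (northern-hemisphere convention).
    winter = monthly_precip_mm[9:] + monthly_precip_mm[:3]
    summer = monthly_precip_mm[3:9]
    dry_winter = min(winter) < max(summer) / 10  # 'w': driest winter month
    hot_summer = warmest >= 22.0                 # 'a': warmest month >= 22 C
    if not (dry_winter and hot_summer and warmest >= 10.0):
        return "not Dwa/Cwa under this sketch"
    if coldest <= 0.0:
        return "Dwa"
    return "Cwa" if coldest < 18.0 else "not Dwa/Cwa under this sketch"
```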
Colorado Springs is one of the most active lightning strike areas in the United States. This natural phenomenon led Nikola Tesla to select Colorado Springs as the preferred location to build his lab and study electricity.
Seasonal climate.
December is typically the coldest month, averaging . Historically, January had been the coldest month, but, in recent years, December has had both lower daily maxima and minima. Typically, there are 5.2 nights with sub- lows and 23.6 days where the high does not rise above freezing.
Snowfall is usually moderate and remains on the ground briefly because of direct sun, with the city receiving per season, although the mountains to the west often receive in excess of triple that amount; March is the snowiest month in the region, both by total accumulation and number of days with measurable snowfall. In addition, 8 of the top 10 heaviest 24-hour snowfalls have occurred from March to May. Summers are warm, with July, the warmest month, averaging , and 18 days of + highs annually. Due to the high elevation and aridity, nights are usually relatively cool and rarely does the low remain above . Dry weather generally prevails, but brief afternoon thunderstorms are common, especially in July and August when the city receives the majority of its annual rainfall, due to the North American monsoon.
The first autumn freeze and the last freeze in the spring, on average, occur on October 2 and May 6, respectively; the average window for measurable snowfall (≥) is October 21 through April 25. Extreme temperatures range from on June 26, 2012 and most recently on June 21, 2016, down to on February 1, 1951, and December 9, 1919.
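Those two average dates imply a frost-free season of about 149 days, which a quick standard-library computation confirms:

```python
from datetime import date

# Average last spring freeze (May 6) to first autumn freeze (October 2);
# any non-leap year gives the same day count.
frost_free = date(2023, 10, 2) - date(2023, 5, 6)
print(frost_free.days)  # -> 149 days, on average, between freezes
```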
Demographics.
2020 census.
As of the 2020 United States census, the population of the city of Colorado Springs was 478,961 (40th most populous U.S. city), the population of the Colorado Springs Metropolitan Statistical Area was 755,105 (79th most populous MSA), and the population of the Front Range Urban Corridor was 5,055,344.
2010 census.
As of the 2010 United States census, 78.8% of the population of the city was White (non-Hispanic Whites were 70.7% of the population, compared with 86.6% in 1970), 16.1% Hispanic or Latino of any race (compared with 7.4% in 1970), 6.3% Black or African American, 3.0% Asian, 1.0% descended from indigenous peoples of the Americas, 0.3% descended from indigenous Hawaiians and other Pacific islanders, 5.5% of some other race, and 5.1% of two or more races. Mexican Americans made up 14.6% of the city's population, compared with 9.1% in 1990. The median age in the city was 35 years.
Economy.
Colorado Springs's economy is driven primarily by the military, the high-tech industry, and tourism, in that order. The city is experiencing slight growth in the service sectors. As of April 2025, the unemployment rate in Colorado Springs was 4.6%, compared with 4.8% for the state and 4.2% for the nation.
Military.
There are nearly 45,000 active-duty troops in the Colorado Springs area, more than 100,000 veterans, and thousands of reservists. The military and defense contractors supply more than 40% of the Pikes Peak region's economy.
Colorado Springs is home to Peterson Space Force Base, Schriever Space Force Base, Cheyenne Mountain Space Force Station, U.S. Space Command, and Space Operations Command, the largest contingent of space service military installations. They are responsible for intelligence gathering, space operations, and cyber missions.
Peterson Space Force Base hosts the North American Aerospace Defense Command (NORAD) and United States Northern Command (USNORTHCOM) headquarters, Space Operations Command, and Space Deltas 2, 3, and 7. Also located at Peterson is the 302nd Airlift Wing, an Air Force Reserve unit that transports passengers and cargo and fights wildfires.
Schriever Space Force Base is responsible for Joint Task Force-Space Defense and Space Deltas 6, 8, and 9. The NORAD and USNORTHCOM Alternate Command Center is located at the Cheyenne Mountain Complex. Within the mountain complex, the Cheyenne Mountain Space Force Station has been operated by Space Operations Command. On January 13, 2021, the Air Force announced a new permanent home for Space Command, moving it from Colorado Springs to Huntsville, Alabama in 2026, but the decision could be reversed by Congress.
Army divisions are trained and stationed at Fort Carson. The United States Air Force Academy was established after World War II, on land donated by the City of Colorado Springs.
Defense industry.
The defense industry forms a significant part of the Colorado Springs economy, with some of the city's largest employers being defense contractors. Some defense corporations have left or downsized city campuses, but slight growth has been recorded. Significant defense corporations in the city include Northrop Grumman, Boeing, General Dynamics, L3Harris Technologies, SAIC, ITT, Lockheed Martin, and Bluestaq. The Space Foundation is based in Colorado Springs.
High-tech industry.
A large percentage of Colorado Springs's economy is still based on manufacturing high-tech and complex electronic equipment. The high-tech sector in the Colorado Springs area nevertheless shrank from around 21,000 jobs in 2000 to around 8,000 in 2006, with notable reductions in information technology and complex electronic equipment. Current trends project that the high-tech employment ratio will continue to decrease.
As of 2023, providers offering fiber-to-the-premises connections within the city include Lumen Technologies and Comcast. Hewlett-Packard still has sales, support, and SAN storage engineering operations for the computer industry in the city. The Storage Networking Industry Association's SNIA Technology Center is located there. Keysight Technologies, spun off in 2014 from Agilent (itself spun off from HP in 1999 as an independent, publicly traded company), has its oscilloscope research and development division based in Colorado Springs. Intel had 250 employees in the city in 2009; its former facility is now used for centralized unemployment offices, social services, El Paso County offices, and a bitcoin mining operation. Microchip Technology (formerly Atmel) operates a chip fabrication facility. The Apple Inc. facility was sold to Sanmina-SCI in 1996.
Arts and culture.
Tourism.
Almost immediately following the arrival of railroads beginning in 1871, the city's location at the base of Pikes Peak and the Rocky Mountains made it a popular tourism destination. Tourism is the third largest employer in the Pikes Peak region, accounting for more than 16,000 jobs. In 2018, 23 million day and overnight visitors came to the area, contributing $2.4 billion in revenue.
Colorado Springs has more than 55 attractions and activities in the area, including Garden of the Gods park, United States Air Force Academy, the ANA Money Museum, Cheyenne Mountain Zoo, Colorado Springs Fine Arts Center at Colorado College, Old Colorado City, The National Museum of World War II Aviation, and the U.S. Olympic & Paralympic Training Center. In 2020, the United States Olympic & Paralympic Museum opened; the Flying W Ranch Chuckwagon Dinner & Western Show reopened in 2020. A new Pikes Peak Summit Complex opened at the summit in 2021. The Manitou and Pikes Peak Railway also reopened in 2021.
The downtown Colorado Springs Visitor Information Center offers free area information to leisure and business travelers. The Cultural Office of the Pikes Peak Region (COPPeR), also downtown, supports and advocates for the arts throughout the Pikes Peak Region. It operates the PeakRadar website to communicate city events.
Annual cultural events.
Colorado Springs is home to the annual Colorado Springs Labor Day Lift Off, a hot air balloon festival that takes place over Labor Day weekend at the city's Memorial Park.
Other annual events include GalaxyFest, a comic book and science fiction convention, in February; PrideFest, a pride parade, in July; the Greek Festival, the Pikes Peak Ascent and Marathon, and the Steers & Beers Whiskey and Beer Festival in August; and, in October, the Emma Crawford Coffin Races and Festival in nearby Manitou Springs and Arts Month.
The Colorado Springs Festival of Lights Parade is held the first Saturday in December. The parade is held on Tejon Street in Downtown Colorado Springs.
Breweries.
In 2017, Colorado had the third-most craft breweries of any state, at 348. Breweries and microbreweries have become popular in Colorado Springs, which hosts more than 30 of them.
Religious institutions.
Although houses of worship of almost every major world religion are within the city, Colorado Springs has in particular attracted a large influx of Evangelical Christians and Christian organizations in recent years. At one time Colorado Springs was the national headquarters for 81 different religious organizations, earning the city the tongue-in-cheek nicknames "the Evangelical Vatican" and "The Christian Mecca".
Several religious groups maintain regional or international headquarters in Colorado Springs.
Marijuana.
Although Colorado voters approved Colorado Amendment 64, a 2012 constitutional amendment legalizing retail sales of marijuana for recreational purposes, the Colorado Springs city council voted not to permit the retail shops the amendment allowed. Medical marijuana outlets continue to operate in Colorado Springs. In 2015, there were 91 medical marijuana clinics in the city, which reported sales of $59.6 million in 2014, up 11 percent from the previous year despite the absence of recreational marijuana shops. On April 26, 2016, the city council voted to extend the existing six-month moratorium on new licenses to eighteen months, with none to be granted until May 2017. A scholarly paper suggested the city would forgo $25.4 million in tax revenue and fees if it continued to keep the industry from opening within city limits. As of March 1, 2018, there were 131 medical marijuana centers and no recreational cannabis stores. As of 2019, Colorado Springs was still one of seven Colorado towns that allowed only medical marijuana.
In popular culture.
Colorado Springs has been the subject of or setting for many books, films and television shows, and is a frequent backdrop for political thrillers and military-themed stories because of its many military installations and vital importance to the United States' continental defense. Notable television series using the city as a setting include "Dr. Quinn, Medicine Woman", "Homicide Hunter" and the "Stargate" series "Stargate SG-1", as well as the films "WarGames", "The Prestige", and "BlacKkKlansman".
In a North Korean propaganda video released in April 2013, Colorado Springs was singled out as one of four targets for a missile strike. The video failed to pinpoint Colorado Springs on the map, instead showing a spot somewhere in Louisiana.
Sports.
Olympic sports.
Colorado Springs, dubbed Olympic City USA, is home to the United States Olympic & Paralympic Training Center and the headquarters of the United States Olympic & Paralympic Committee and the United States Anti-Doping Agency.
Further, more than 50 non-Olympic national sports organizations are headquartered in Colorado Springs. These include the National Strength and Conditioning Association and various non-Olympic sports bodies such as USA Ultimate.
Colorado Springs and Denver hosted the 1962 World Ice Hockey Championships.
The city has a long association with the sport of figure skating, having hosted the U.S. Figure Skating Championships six times and the World Figure Skating Championships five times. It is home to the World Figure Skating Museum and Hall of Fame and the Broadmoor Skating Club, a notable training center for the sport. In recent years, the Broadmoor World Arena has hosted skating events such as Skate America and the Four Continents Figure Skating Championships.
Baseball.
Colorado Springs is home to a professional baseball team, the Rocky Mountain Vibes, who are a member of the Pioneer League, an MLB Partner League.
Pikes Peak International Hill Climb.
The Pikes Peak International Hill Climb (PPIHC), also known as "The Race to the Clouds," is an annual invitational automobile and motorcycle hill climb to the summit of Pikes Peak, held every year on the last Sunday of June. The highway to the summit was not completely paved until 2011.
Local collegiate teams.
The local colleges field many sports teams. Notable among them are several nationally competitive NCAA Division I programs: United States Air Force Academy (Falcons) football, basketball, and hockey, and Colorado College (Tigers) hockey and women's soccer.
Rodeo.
Colorado Springs was the original headquarters of the Professional Bull Riders (PBR) from its founding in 1992 until 2005, when the organization was moved to Pueblo.
Parks and recreation.
The city's Parks, Recreation and Cultural Services department manages 136 neighborhood parks, eight community parks, seven regional parks, and five sports complexes. It also manages both park and urban trails, along with 48 open-space areas.
Parks.
Garden of the Gods is on Colorado Springs's western edge. It is a National Natural Landmark, with red/orange sandstone rock formations often viewed against a backdrop of the snow-capped Pikes Peak. This park is free to the public and offers many recreational opportunities, such as hiking, rock climbing, cycling, horseback riding and tours. It offers a variety of annual events, one of the most popular of which is the Starlight Spectacular, a recreational bike ride held every summer to benefit the Trails and Open Space Coalition of Colorado Springs.
Colorado Springs has several major city parks, such as Palmer Park, America the Beautiful Park in downtown, Memorial Park, which includes many sports fields, an indoor swimming pool and skating rink, a skateboard bowl and two half-pipes, and Monument Valley Park, which has walking and biking paths, an outdoor swimming pool and pickleball courts. Monument Valley Park also has Tahama Spring, the original spring in Colorado Springs. Austin Bluffs Park affords a place of recreation in eastern Colorado Springs. El Paso County Regional Parks include Bear Creek Regional Park, Bear Creek Dog Park, Fox Run Regional Park and Fountain Creek Regional Park and Nature Center. Common vegetation includes ponderosa pine ("Pinus ponderosa"), Gambel oak ("Quercus gambelii"), narrowleaf yucca ("Yucca angustissima", syn. "Yucca glauca"), and prickly pear cactus ("Opuntia macrorhiza").
Trails.
Three trails, the New Santa Fe Regional Trail, Pikes Peak Greenway and Fountain Creek Regional Trail, form a continuous path from Palmer Lake, through Colorado Springs, to Fountain, Colorado. The majority of the trail between Palmer Lake and Fountain is a soft surface breeze gravel trail. A major segment of the trail within the Colorado Springs city limits is paved. The trails, except Monument Valley Park trails, may be used for equestrian traffic. Motorized vehicles are not allowed on the trails. Many of the trails are interconnected, having main spine trails, like the Pikes Peak Greenway, that lead to secondary trails.
Government.
On November 2, 2010, Colorado Springs voters adopted a council–strong mayor form of government, and the city transitioned to the new system in 2011. Under it, the mayor is the chief executive and the city council is the legislative branch. The mayor is a full-time elected official and not a member of the council. The council has nine members: six who each represent one of six equally populated districts, and three elected at-large.
Colorado Springs City Hall was built from 1902 to 1904 on land donated by W. S. Stratton.
City council.
The Colorado Springs City Council consists of nine elected officials, six of whom represent districts and three of whom represent the city at-large. Randy Helms is council president.
Politics.
In 2017, Caleb Hannan wrote in "Politico" that Colorado Springs was "staunchly Republican", "a right-wing counterweight to liberal Boulder", and that a study ranked it "the fourth most conservative city in America". In 2016, Hannan wrote that downtown Colorado Springs had a different political vibe from the overall area's and that there were "superficial signs of changing demographics". Since 2020, Colorado Springs has continued to shift towards the political center. In 2022, Governor Jared Polis won the city in his bid for reelection. In the 2023 mayoral election, independent candidate Yemi Mobolade handily won the race and became the first elected non-Republican mayor of the city.
Education.
Primary and secondary education.
Public schools
Public education in the city is divided among several school districts.
Private schools
In addition, the state of Colorado runs the Colorado School for the Deaf and the Blind, a residential school established in 1874 for students up to age 21, in the city.
Higher education.
State institutions offering bachelor's and graduate degree programs in Colorado Springs include the University of Colorado Colorado Springs (UCCS), with more than 12,000 students, and Pikes Peak State College, which mostly offers two-year associate degrees. The United States Air Force Academy is a federal institution offering bachelor's degrees for officer candidates.
Private non-profit institutions include Colorado College, established in 1874, with about 2,000 undergraduates. Colorado Christian University has its Colorado Springs Center in the city.
Private for-profit institutions include Colorado Technical University, whose main campus is in Colorado Springs, and IntelliTec College, a technical training school.
Transportation.
Roads.
I-25 runs north–south through Colorado and traverses the city, entering south of Circle Drive and exiting north of North Gate Boulevard. In El Paso County it is known as the Ronald Reagan Highway. An Interstate 25 bypass was approved in 2010.
A number of state and U.S. highways serve the city. State Highway 21 is a major east side semi-expressway from Black Forest to Fountain, known locally and co-signed as Powers Boulevard. State Highway 83 runs north–south from central Denver to northern Colorado Springs. State Highway 94 runs east–west from western Cheyenne County to eastern Colorado Springs where it terminates at US 24. US 24 is a major route through the city and county, providing access to Woodland Park via Ute Pass to the west and downtown, Nob Hill and numerous suburbs to the east. It is co-signed with Platte Ave after SH 21 and originally carried local traffic through town. The Martin Luther King Jr Bypass runs from I-25 near Circle Drive along Fountain Blvd to SH 21, then east again. State Highway 115 begins in Cañon City, traveling north along the western edge of Fort Carson; when it reaches the city limits it merges with Nevada Avenue, a signed Business Route of US 85. US 85 and SH 115 are concurrent between Lake Avenue and I-25. US 85 enters the city at Fountain and was signed at Venetucci Blvd, Lake Avenue, and Nevada Avenue at various points in history; however most of US 85 is concurrent with I-25 and is not signed.
In 2004, the voters of Colorado Springs and El Paso County established the Pikes Peak Rural Transportation Authority.
Airport.
Colorado Springs Airport (COS; ICAO: KCOS) has been in operation since 1925. It is the second-largest commercial airport in the state, after Denver International Airport (DEN; ICAO: KDEN). COS is considered a joint-use civilian and military airport, as Peterson Space Force Base is a tenant of the airport. It has three paved runways: 17L/35R, 17R/35L, and 13/31. The airport handled 2,134,618 passengers in 2022, and is served by American, Avelo, Delta, Southwest, Sun Country, and United.
Railroads.
Freight service is provided by Union Pacific and BNSF.
Once an important rail hub, the city was served by four Class 1 railroads, as well as a number of smaller operators, some of them narrow gauge, and an extensive streetcar system, the Colorado Springs and Interurban Railway.
Currently there is no intercity passenger service; the last remaining services connecting the Front Range cities ceased with the formation of Amtrak in 1971. Front Range Passenger Rail is a current proposal (as of 2023) to link the cities from Pueblo in the south to Fort Collins in the north, and possibly Cheyenne, Wyoming.
Bicycling.
As of 2017, Colorado Springs has a network of bike lanes and paved trails. PikeRide is a local electric bike-share program that operates in the urban core, Old Colorado City, and Manitou Springs.
In April 2018, the Colorado Springs City Council approved a Bike Master Plan. The vision of the city's Bike Master Plan is "a healthy and vibrant Colorado Springs where bicycling is one of many transportation options for a large portion of the population, and where a well-connected and well-maintained network of urban trails, single-track, and on-street infrastructure offers a bicycling experience for present and future generations that is safe, convenient, and fun for getting around, getting in shape, or getting away."
Bike lanes in Colorado Springs have not been deployed without controversy. According to "The Gazette", its readers "have mixed feelings for new bike lanes." In December 2016, the city removed a bike lane along Research Parkway due to overwhelming opposition; an online survey found that 80.5% of respondents opposed the bike lane. "The Gazette" has stated that since the Bike Master Plan was adopted by city council, "no issue has elicited more argument in "The Gazette" pages," and due to this immense public interest, on February 25, 2019, the paper hosted a town hall meeting called "Battle of the Bike Lanes".
Walkability.
A 2011 study by Walk Score ranked Colorado Springs the 34th most walkable of the fifty largest U.S. cities.
Buses.
Mountain Metropolitan Transit (commonly referred to as MMT) is the primary public transportation provider for the Colorado Springs metropolitan region. MMT operates thirty-four bus routes, providing service for Colorado Springs, Manitou Springs, and Security-Widefield. The Downtown Terminal is the system's main hub, with the Citadel Mall, PPSC, and Chapel Hills Mall acting as secondary transfer stations.
Mountain Metro Mobility is an Americans with Disabilities Act (ADA) federally mandated complementary ADA paratransit service, which provides demand-response service for individuals with mobility needs that prevent them from using the fixed-route bus system.
Intercity bus service is available through the state-run Bustang service and Greyhound. Bustang runs frequent trips to Denver and daily trips to Lamar via Pueblo.
Sister cities.
Colorado Springs has seven sister cities.
Colorado Springs's sister city organization began when it became partners with Fujiyoshida. The "torii" gate erected to commemorate the relationship stands at the corner of Bijou Street and Nevada Avenue, and is one of the city's most recognizable landmarks. The "torii" gate, crisscrossed bridge and shrine, in the median between Platte and Bijou Streets downtown, were a gift to Colorado Springs, erected in 1966 by the Rotary Club of Colorado Springs to celebrate the friendship between the two communities. A plaque near the "torii" gate states that "the purpose of the sister city relationship is to promote understanding between the people of our two countries and cities". The Fujiyoshida Student exchange program has become an annual event.
In 2006 and 2010, the Bankstown TAP (Talent Advancement Program) performed with the Youth Symphony and the Colorado Springs Children's Chorale as part of the annual "In Harmony" program. A notable similarity between Colorado Springs and its sister cities is their geographic positions: three of the seven cities are near the foot of a major mountain or mountain range, as is Colorado Springs.
6251 | 1301054316 | https://en.wikipedia.org/wiki?curid=6251 | Professional certification |
Professional certification, trade certification, or professional designation, often called simply "certification" or "qualification", is a designation earned by a person to assure qualification to perform a job or task. Not all certifications that use post-nominal letters are an acknowledgement of educational achievement or awarded by an agency appointed to safeguard the public interest.
Overview.
A certification is a third-party attestation of an individual's level of knowledge or proficiency in a certain industry or profession. They are granted by authorities in the field, such as professional societies and universities, or by private certificate-granting agencies. Most certifications are time-limited; some expire after a period of time (e.g., the lifetime of a product that requires certification for use), while others can be renewed indefinitely as long as certain requirements are met. Renewal usually requires ongoing education to remain up-to-date on advancements in the field, evidenced by earning the specified number of continuing education credits (CECs), or continuing education units (CEUs), from approved professional development courses.
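As a concrete illustration of the renewal cycle described above, here is a minimal sketch of a CEU ledger check; the class name, the 20-unit requirement, and the dates are hypothetical, not drawn from any particular certifying body:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Certification:
    holder: str
    expires: date
    ceus_required: int = 20          # hypothetical renewal requirement
    ceus_earned: list = field(default_factory=list)

    def log_course(self, units: float):
        """Record CEUs from an approved professional development course."""
        self.ceus_earned.append(units)

    def renewable(self, today: date) -> bool:
        """Renewal needs enough CEUs logged before the expiry date."""
        return today <= self.expires and sum(self.ceus_earned) >= self.ceus_required

cert = Certification("A. Example", expires=date(2026, 6, 30))
cert.log_course(12.5)
cert.log_course(8.0)
print(cert.renewable(date(2025, 1, 15)))  # -> True (20.5 of 20 CEUs earned)
```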
Many certification programs are affiliated with professional associations, trade organizations, or private vendors interested in raising industry standards. Certification programs are often created or endorsed by professional associations, but are typically completely independent from membership organizations. Certifications are very common in fields such as aviation, construction, technology, environment, and other industrial sectors, as well as healthcare, business, real estate, and finance.
According to "The Guide to National Professional Certification Programs" (1997) by Phillip Barnhart, "certifications are portable, since they do not depend on one company's definition of a certain job" and they provide potential employers with "an impartial, third-party endorsement of an individual's professional knowledge and experience".
Certification is different from professional licensure. In the United States, licenses are typically issued by state agencies, whereas certifications are usually awarded by professional societies or educational institutes. Obtaining a certificate is voluntary in some fields, but in others, certification from a government-accredited agency may be legally required to perform certain jobs or tasks. In other countries, licenses are typically granted by professional societies or universities, and a certificate must be renewed after about three to five years, and periodically thereafter. The assessment process for certification may be more comprehensive than that of licensure, though sometimes the assessment process is very similar or even the same, despite differing in terms of legal status.
The American National Standards Institute (ANSI) defines the standard for being a certifying agency as meeting two requirements.
The Institute for Credentialing Excellence (ICE) is a U.S.-based organization that sets standards for the accreditation of personnel certification and certificate programs based on the "Standards for Educational and Psychological Testing", a joint publication of the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME). Many members of the Association of Test Publishers (ATP) are also certification organizations.
Categorization.
There are three general types of certification. Listed in order of development level and portability, they are: corporate (internal), product-specific, and profession-wide.
Corporate, or "internal" certifications, are made by a corporation or low-stakes organization for internal purposes. For example, a corporation might require a one-day training course for all sales personnel, after which they receive a certificate. While this certificate has limited portability – to other corporations, for example – it is the most simple to develop.
Product-specific certifications are more involved, and are intended to be referenced to a product across all applications. This approach is very prevalent in the information technology (IT) industry, where personnel are certified on a version of software or hardware. This type of certification is portable across locations (for example, different corporations that use that software), but not across other products. Another example is the certification of shipping personnel, which is governed by international standards, including standards for recognition of the certification body, under the International Maritime Organization (IMO).
The most general type of certification is profession-wide. Certification in the medical profession is often offered by particular specialties. In order to apply professional standards, increase the level of practice, and protect the public, a professional organization might establish a certification. This is intended to be portable to all places a certified professional might work. Of course, this generalization increases the cost of such a program; the process to establish a legally defensible assessment of an entire profession is very extensive. An example of this is a certified public accountant (CPA), which would not be certified for just one corporation or one piece of accountancy software but for general work in the profession.
Professional certificates awarded by tertiary education providers.
Many tertiary education providers grant professional certificates as an award for the completion of an educational program. The curriculum of a professional certificate is most often in a focused subject matter. Many professional certificates have the same curriculum as master's degrees in the same subject. Many other professional certificates offer the same courses as master's degrees in the same subject, but require the student to take fewer total courses to complete the program. Some professional certificates have a curriculum that more closely resembles a baccalaureate major in the same field. The typical professional certificate program is between 200 and 300 class-hours in size. It is uncommon for a program to be larger or smaller than that. Most professional certificate programs are open enrollment, but some have admissions processes. A few universities put some of their professional certificates into a subclass they refer to as advanced professional certificates.
Advanced professional certificate.
"Advanced professional certificates" are professional credentials designed to help professionals enhance their job performance and marketability in their respective fields. In many other countries, certificates are qualifications in higher education. In the United States, a certificate may be offered by an institute of higher education. These certificates usually signify that a student has reached a standard of knowledge of a certain vocational subject. Certificate programs can be completed more quickly than associate degrees and often do not have general education requirements.
An advanced professional certificate is the result of an educational process designed for working professionals. Certificates are designed for newcomers to the industry as well as seasoned professionals, and are awarded by an educational program or academic institution. Completion of a certificate program indicates completion of a course or series of courses with a specific concentration, as distinct from an educational degree program. Course content for an advanced certificate is set by a variety of sources, i.e., faculty, committees, instructors, and other subject-matter experts in a related field. The end goal of an advanced professional certificate is for professionals to demonstrate knowledge of course content at the end of a set period of time.
Areas of certification.
Accountancy, auditing and finance.
There are many professional bodies for accountants and auditors throughout the world; some of them are legally recognized in their jurisdictions.
Public accountants are the accountancy and control experts who are legally certified in different jurisdictions to work in public practice, certifying accounts as statutory auditors and selling advice and services to other individuals and businesses. Today, however, many work within private corporations, the financial industry, and government bodies.
Accounting and external auditing.
Cf. Accountancy qualifications and regulation
Aviation.
Aviators are certified through theoretical and in-flight examinations. Requirements for certification are broadly similar in most countries and are regulated by each national aviation authority. Pilot certificates and licenses fall into several categories.
Licensing in these categories requires not only examinations but also a minimum number of flight hours. All categories are available for fixed-wing aircraft (airplanes) and rotary-wing aircraft (helicopters). Within each category, aviators may also obtain further certifications.
Usually, aviators must also be certified in their log books for the type and model of aircraft they are allowed to fly. Currency checks, as well as regular medical check-ups with a frequency of 6, 12, or 36 months depending on the type of flying permitted, are obligatory. An aviator can fly only while holding the required certificates.
In Europe, ANSPs, ATCOs, and ANSP technicians are certified according to the EUROCONTROL Safety Regulatory Requirements (ESARRs), per EU regulation 2096/2005, "Common Requirements".
Communications.
In the United States, several communications certifications are conferred by the Electronics Technicians Association.
Computer technology.
Certification is often used in the professions of software engineering and information technology.
Dance.
Conferred by the International Dance Council (CID) at UNESCO, the International Certification of Dance Studies is awarded to students who have completed 150 hours of classes in a specific form of dance for Level 1. Another 150 hours are required for Level 2, and so on up to Level 10 (1,500 hours in total). This is the only international certification for dance, since the International Dance Council is the official body for all forms of dance; it is usually given in addition to local or national certificates, which is why it is colloquially called "the dancer's passport". Students cannot apply for this certification directly; they have to ask their school to apply on their behalf. The certification is awarded free of charge; there is no cost other than membership fees.
Electronics.
In the United States, several electronics certifications are provided by the Electronics Technicians Association.
Emergency management.
The Federal Emergency Management Agency's Emergency Management Institute (EMI) offers credentials and training opportunities for United States citizens. Students do not have to be employed by FEMA or be federal employees for some of the programs.
Engineering.
Professional engineering is any act of planning, designing, composing, measuring, evaluating, inspecting, advising, reporting, directing or supervising, or managing any of the foregoing, that requires the application of engineering principles and that concerns the safeguarding of life, health, property, economic interests, the public interest or the environment.
Event planning.
Event planning includes budgeting, scheduling, site selection, acquiring necessary permits, coordinating transportation and parking, arranging for speakers or entertainers, arranging decor, event security, catering, coordinating with third-party vendors, and emergency plans.
Warehousing management.
A warehouse management system (WMS) is a part of the supply chain and primarily aims to control the movement and storage of materials within a warehouse and process the associated transactions, including shipping, receiving, putaway and picking. The systems also direct and optimize stock putaway based on real-time information about the status of bin utilization. A WMS monitors the progress of products through the warehouse. It involves the physical warehouse infrastructure, tracking systems, and communication between product stations.
More precisely, warehouse management involves the receipt, storage, and movement of goods (normally finished goods) to intermediate storage locations or to a final customer. In the multi-echelon model for distribution, there may be multiple levels of warehouses: a central warehouse, regional warehouses serviced by the central warehouse, and potentially retail warehouses serviced by the regional warehouses.
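A toy sketch of the putaway decision described above: given real-time bin utilization, choose the bin with the most free capacity that fits the incoming quantity. Bin names and capacities are invented; a production WMS would also weigh travel distance, zoning, and item compatibility:

```python
# Toy putaway: choose the storage bin with the most free space that can
# hold the incoming quantity. Bin data here is illustrative only.
bins = {
    "A-01": {"capacity": 100, "used": 80},
    "A-02": {"capacity": 100, "used": 35},
    "B-01": {"capacity": 50,  "used": 10},
}

def choose_bin(qty: int) -> str | None:
    free = {name: b["capacity"] - b["used"] for name, b in bins.items()}
    fitting = {name: f for name, f in free.items() if f >= qty}
    return max(fitting, key=fitting.get) if fitting else None

print(choose_bin(40))  # -> "A-02" (65 units free, the roomiest bin that fits)
```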
Explosive atmospheres.
IECEx covers the specialized field of explosion protection associated with the use of equipment in areas where flammable gases, liquids, and combustible dusts may be present. The system provides assurance that equipment is manufactured to meet safety standards, and that services such as installation, repair, and overhaul comply with the IEC 60079 series of international standards. The UNECE (United Nations Economic Commission for Europe) cited IECEx as one example of a practice model for verifying conformity to IEC standards for smaller European countries without certification schemes for such equipment. It published a "Common Regulatory Framework" as a suggestion for countries implementing a certification program for the explosive-atmospheres segment.
Insurance and risk management.
In the United States, insurance professionals are licensed separately by each state. Many individuals seek one or more certifications to distinguish themselves from their peers.
Language education.
TESOL is a large field of employment with widely varying degrees of regulation. Most provision worldwide is through the state school system of each individual country, and as such, the instructors tend to be trained primary- or secondary-school teachers who are native speakers of the language of their pupils, and not of English. Though native speakers of English have been working in non-English-speaking countries in this capacity for years, it was not until the last twenty-five years or so that there was any widespread focus on training specifically for this field. Previously, workers in this sort of job were backpacker tourists hoping to earn some extra travel money, well-educated professionals in other fields volunteering, or retired people. These sorts of people are certainly still to be found, but there are many who now consider TESOL their main profession.
One of the problems facing these full-time teachers is the absence of an international governing body for the certification or licensure of English language teachers. However, Cambridge University and its subsidiary body UCLES are pioneers in trying to bring some degree of accountability and quality control to consumers of English courses, through their CELTA and DELTA programs. Trinity College London has equivalent programs, the CertTESOL and the LTCL DipTESOL. They offer initial certificates in teaching, in which candidates are trained in language awareness and classroom techniques and given a chance to practice teaching, after which feedback is reported. Both institutions have as a follow-up a professional diploma, usually taken after a year or two in the field. Although the initial certificate is available to anyone with a high school education, the diploma is meant to be a post-graduate qualification and can in fact be incorporated into a master's degree program.
Legal affairs.
An increasing number of attorneys are choosing to be recognized as having special expertise in certain fields of law. According to the American Bar Association, a lawyer who is a certified specialist has been recognized by an independent professional certifying organization as having an enhanced level of skill and expertise, as well as substantial involvement in an established legal specialty. These organizations require a lawyer to demonstrate special training, experience and knowledge to ensure that the lawyer's recognition is meaningful and reliable. Lawyer conduct with regard to specialty certification is regulated by the states.
Legal administrators vary in their day-to-day responsibilities and job requirements. The Association of Legal Administrators (ALA) is the credentialing body of the Certified Legal Manager (CLM) certification program. CLMs are recognized as administrators who have passed a comprehensive examination and have met other eligibility requirements.
Logistics and transport.
A logistician is a professional in the logistics and transport sectors, including the sea, air, land, and rail modes. Professional qualification for logisticians usually carries post-nominal letters.
Certification granting bodies include, but are not limited to, Institute for Supply Management (ISM), Association for Operations Management (APICS), Chartered Institute of Logistics and Transport (CILT), International Society of Logistics (SOLE), Canadian Institute of Traffic and Transportation (CITT), and Allied Council for Commerce and Logistics (ACCL).
Management consulting.
Management consulting is the practice of providing consulting services to organizations to improve their performance or in any way to assist in achieving any sort of organizational objectives.
The profession's primary certification is the "Certified Management Consultant" (CMC) designation.
Certification granting bodies are the approximately 50 Institutes of Management Consulting belonging to the International Council of Management Consulting Institutes (ICMCI).
Ministers.
Churches have their own processes for deciding who may use various religious titles. Protestant churches typically require a Master of Divinity, accreditation by the denomination, and ordination by the local church in order for a minister to become a "Reverend". Those qualifications may or may not also confer government authorization to solemnize marriages.
Medicine.
Board certification is the process by which a physician in the United States demonstrates, through written, practical, or computer-based testing, a mastery of the knowledge and skills that define a particular area of medical specialization. The American Board of Medical Specialties, a not-for-profit organization, assists 24 approved medical specialty boards in the development and use of standards for the ongoing evaluation and certification of physicians.
Medical specialty certification in the United States is a voluntary process. While medical licensure sets the minimum competency requirements to diagnose and treat patients, it is not specialty specific. Board certification demonstrates a physician's exceptional expertise in a particular specialty or sub-specialty of medical practice.
Patients, physicians, health care providers, insurers and quality organizations regard certification as an important measure of a physician's knowledge, experience and skills to provide quality health care within a given specialty.
Other professional certifications include medical licenses, Membership of the Royal College of Physicians, Fellowship of the Royal College of Physicians and Surgeons of Canada, nursing board certification, and diplomas in social work. The Commission for Certification in Geriatric Pharmacy certifies pharmacists who are knowledgeable about principles of geriatric pharmacotherapy and the provision of pharmaceutical care to the elderly. Additional certifying bodies related to the medical field also exist.
Peer support.
NCPRP stands for "National Certified Peer Recovery Professional". The NCPRP credential and exam were developed in collaboration with the International Certification Board of Recovery Professionals (ICBRP) and are currently administered by PARfessionals.
PARfessionals is a professional organization and all of the available courses are professional development and pre-certification courses.
The NCPRP credential and exam focus primarily on the concept of peer recovery through mental health and addiction recovery. Their main purpose is to train student-candidates to become peer recovery professionals who can provide guidance, knowledge, or assistance to individuals who have had similar experiences.
Each student-candidate must complete several key steps, including initial registration, the pre-certification review course, and all applicable sections of the official application, in order to become eligible for the final step, the NCPRP certification exam.
The NCPRP credential is obtained once a participant successfully passes the NCPRP certification exam by the second attempt and is valid for five years.
Project management.
A number of organizations offer project management certifications.
Public relations.
In the US, the Universal Accreditation Board, an organization composed of the Public Relations Society of America, the Agricultural Relations Council, the National School Public Relations Association, the Religious Communicators Council and other public relations professional societies, administers the Accreditation in Public Relations (APR), a voluntary certification program for public relations practitioners.
Real estate management.
The Building Owners and Managers Association and the International Facility Management Association offer professional certifications for the operation and management of commercial properties.
Sales.
A number of organizations offer sales certifications.
Criticisms.
Political commentators have criticized professional or occupational licensing, especially medical and legal licensing, for restricting the supply of services and therefore making them more expensive, often putting them out of reach of the poor.
6255 | 7903804 | https://en.wikipedia.org/wiki?curid=6255 | Carl Menger |
Carl Menger von Wolfensgrün (28 February 1840 – 26 February 1921) was an Austrian economist who contributed to the marginal theory of value. Menger is considered the founder of the Austrian school of economics.
In building his marginalist approach, Menger rejected many established views of classical economics. He directly disputed the view of the "German school" that economic theory could be derived from history. Departing from the cost-of-production theory of value—the prevailing theory of Adam Smith, David Ricardo, and Karl Marx—Menger's subjective theory of value emphasized the role of mutual agreement in deriving prices. Although he had few readers outside Vienna until late in his career, disciples including Eugen von Böhm-Bawerk and Friedrich von Wieser brought his theories into wider readership. Friedrich Hayek wrote that the Austrian school's "fundamental ideas belong fully and wholly to Carl Menger."
Menger began his career as a lawyer and business journalist, during which he saw inconsistencies between existing economic theory and how buyers reasoned. After formal training in economics, he taught at the University of Vienna from 1872 to 1903. He became a private tutor and confidant to Rudolf von Habsburg, the crown prince of Austria.
Biography.
Family and education.
Carl Menger von Wolfensgrün was born in the city of Neu-Sandez in the Kingdom of Galicia and Lodomeria, Austrian Empire, which is now Nowy Sącz in Poland. He was the son of a wealthy family of minor nobility; his father, Anton Menger, was a lawyer. His mother, Caroline Gerżabek, was the daughter of a wealthy Bohemian merchant. He had two brothers, Anton and Max, both prominent as lawyers. His son, Karl Menger, was a mathematician who taught for many years at Illinois Institute of Technology.
After attending Gymnasium, he studied law at the universities of Prague and Vienna and later received a doctorate in jurisprudence from the Jagiellonian University in Kraków. In the 1860s Menger left school and enjoyed a stint as a journalist reporting and analyzing market news, first at the "Lemberger Zeitung" in Lemberg, Austrian Galicia (now Lviv, Ukraine) and later in Vienna.
Career.
During the course of his newspaper work, he noticed a discrepancy between what the classical economics he was taught in school said about price determination and what real world market participants believed. In 1867, Menger began a study of political economy which culminated in 1871 with the publication of his "Principles of Economics", thus becoming the father of the Austrian school of economics. It was in this work that he challenged classical cost-based theories of value with his theory of marginality – that price is determined at the margin.
In 1872 Menger joined the law faculty at the University of Vienna and spent the next several years teaching finance and political economy, both in seminars and in lectures, to a growing number of students. In 1873, he received the university's chair of economic theory at the very young age of 33.
In 1876 Menger began tutoring Archduke Rudolf von Habsburg, the crown prince of Austria, in political economy and statistics. For two years, Menger accompanied the prince during his travels, first through continental Europe and then later through the British Isles. He is also thought to have assisted the crown prince in the composition of a pamphlet, published anonymously in 1878, which was highly critical of the higher Austrian aristocracy. His association with the prince would last until Rudolf's suicide in 1889.
In 1878 Rudolf's father, Emperor Franz Joseph, appointed Menger to the chair of political economy at Vienna. The title of "Hofrat" was conferred on him, and he was appointed to the Austrian upper house in 1900.
Dispute with the historical school.
Ensconced in his professorship, he set about refining and defending the positions he took and methods he utilized in "Principles", the result of which was the 1883 publication of "Investigations into the Method of the Social Sciences with Special Reference to Economics". The book caused a firestorm of debate, during which members of the historical school of economics began to derisively call Menger and his students the "Austrian school" to emphasize their departure from mainstream German economic thought – the term was specifically used in an unfavourable review by Gustav von Schmoller.
In 1884 Menger responded with the pamphlet "The Errors of Historicism in German Economics", launching the infamous "Methodenstreit", or methodological debate, between the historical school and the Austrian school. During this time Menger began to attract like-minded disciples who would go on to make their own mark on the field of economics, most notably Eugen von Böhm-Bawerk and Friedrich von Wieser.
In the late 1880s, Menger was appointed to head a commission to reform the Austrian monetary system. Over the course of the next decade he wrote a series of articles that would transform monetary theory, including "The Theory of Capital" (1888) and "Money" (1892). Largely because of his pessimism about the state of German scholarship, Menger resigned his professorship in 1903 to concentrate on his studies.
Economics.
Menger used his subjective theory of value to arrive at what he considered one of the most powerful insights in economics: "both sides gain from exchange." Unlike William Jevons, Menger did not believe that goods provide "utils", or units of utility. Rather, he wrote, goods are valuable because they serve various uses whose importance differs. Menger also developed an explanation of how money emerges that is still accepted by some schools of thought today.
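Menger's marginal reasoning lends itself to a small numeric sketch in the spirit of his famous grain example: a holder ranks the uses of successive units of a good, and the value of any single unit is the importance of the least important use that would go unsatisfied without it. The rankings below are hypothetical, chosen only to illustrate the idea.

```c
#include <stdio.h>

/* Hypothetical importance rankings for successive uses of a good,
 * ordered from most to least important (Menger's grain example in
 * miniature): eating, seed grain, feeding livestock, brewing. */
static const int use_importance[] = {10, 8, 6, 4};
static const int num_uses = 4;

/* The marginal value of a stock of `units` is the importance of the
 * least important use actually satisfied: losing one unit would force
 * giving up exactly that use. */
int marginal_value(int units) {
    if (units <= 0) return 0;
    if (units > num_uses) return 0;   /* surplus units satisfy no ranked use */
    return use_importance[units - 1];
}

int main(void) {
    for (int units = 1; units <= 5; units++)
        printf("stock of %d unit(s): marginal value %d\n",
               units, marginal_value(units));
    return 0;
}
```

The same physical good thus carries a different value at the margin depending on how much of it the holder already has – the sense in which price is "determined at the margin".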
Money.
Menger argued that gold and silver were adopted as money because of unique attributes such as costliness, durability, and ease of preservation, which made them the "most popular vehicle for hoarding as well as the goods most highly favoured in commerce." He showed that "their special saleableness" tended to make their bid-ask spread tighter than that of any other market good, which led to their adoption as a general medium of exchange and their evolution into money in many societies.
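One way to make "saleableness" concrete is the relative bid-ask spread, the proportional loss from buying a good and immediately reselling it: the tighter the spread, the more saleable the good. The quotes below are invented for illustration; only the formula is standard.

```c
#include <stdio.h>

/* Relative bid-ask spread: (ask - bid) / midpoint. A smaller value
 * means a good can be bought and resold with less loss, i.e. it is
 * more "saleable" in Menger's sense. */
double relative_spread(double bid, double ask) {
    return (ask - bid) / ((ask + bid) / 2.0);
}

int main(void) {
    /* Invented quotes: a gold coin versus a head of cattle. */
    printf("gold:   %.4f\n", relative_spread(99.5, 100.5)); /* ~1%  */
    printf("cattle: %.4f\n", relative_spread(80.0, 120.0)); /* ~40% */
    return 0;
}
```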
List of cartoonists
This is a list of cartoonists, visual artists who specialize in drawing cartoons. This list includes only notable cartoonists and is not meant to be exhaustive. Note that the word 'cartoon' only took on its modern sense after its use in "Punch" magazine in the 1840s; artists working earlier than that are more correctly termed 'caricaturists'.
Civilization
A civilization (also spelled civilisation in British English) is any complex society characterized by the development of the state, social stratification, urbanization, and symbolic systems of communication beyond signed or spoken languages (namely, writing systems).
Civilizations are organized around densely populated settlements, divided into more or less rigid hierarchical social classes with a division of labour, often with a ruling elite and subordinate urban and rural populations, which engage in intensive agriculture, mining, small-scale manufacture and trade. Civilization concentrates power, extending human control over the rest of nature, including over other human beings. Civilizations are characterized by elaborate agriculture, architecture, infrastructure, technological advancement, currency, taxation, regulation, and specialization of labour.
Historically, a civilization has often been understood as a larger and "more advanced" culture, in implied contrast to smaller, supposedly less advanced cultures, even societies within civilizations themselves and within their histories. Generally civilization contrasts with non-centralized tribal societies, including the cultures of nomadic pastoralists, Neolithic societies, or hunter-gatherers.
The word "civilization" relates to the Latin or 'city'. As the National Geographic Society has explained it: "This is why the most basic definition of the word "civilization" is 'a society made up of cities.'"
The earliest emergence of civilizations is generally connected with the final stages of the Neolithic Revolution in West Asia, culminating in the relatively rapid process of urban revolution and state formation, a political development associated with the appearance of a governing elite.
History of the concept.
The English word "civilization" comes from the French "civilisé" ('civilized'), from the Latin "civilis" ('civil'), related to "civis" ('citizen') and "civitas" ('city'). The fundamental treatise is Norbert Elias's "The Civilizing Process" (1939), which traces social mores from medieval courtly society to the early modern period. In "The Philosophy of Civilization" (1923), Albert Schweitzer outlines two opinions: one purely material and the other material and ethical. He said that the world crisis was from humanity losing the ethical idea of civilization, "the sum total of all progress made by man in every sphere of action and from every point of view in so far as the progress helps towards the spiritual perfecting of individuals as the progress of all progress".
Related words like "civility" developed in the mid-16th century. The abstract noun "civilization", meaning "civilized condition", came in the 1760s, again from French. The first known use in French is in 1757, by Victor de Riqueti, marquis de Mirabeau, and the first use in English is attributed to Adam Ferguson, who in his 1767 "Essay on the History of Civil Society" wrote, "Not only the individual advances from infancy to manhood but the species itself from rudeness to civilisation". The word was therefore opposed to barbarism or rudeness, in the active pursuit of progress characteristic of the Age of Enlightenment.
In the late 1700s and early 1800s, during the French Revolution, "civilization" was used in the singular, never in the plural, and meant the progress of humanity as a whole. This is still the case in French. "Civilizations" as a countable noun was in occasional use in the 19th century, but became much more common in the later 20th century, sometimes just meaning culture (itself in origin an uncountable noun, made countable in the context of ethnography). Only in this generalized sense does it become possible to speak of a "medieval civilization", which in Elias's sense would have been an oxymoron. Using the terms "civilization" and "culture" as equivalents is controversial and generally rejected, so that, for example, some types of culture are not normally described as civilizations.
Already in the 18th century, civilization was not always seen as an improvement. One historically important distinction between culture and civilization is from the writings of Rousseau, particularly his work about education, "Emile". Here, civilization, being more rational and socially driven, is not fully in accord with human nature, and "human wholeness is achievable only through the recovery of or approximation to an original discursive or pre-rational natural unity" (see noble savage). From this, a new approach was developed, especially in Germany, first by Johann Gottfried Herder and later by philosophers such as Kierkegaard and Nietzsche. This approach sees cultures as natural organisms, not defined by "conscious, rational, deliberative acts", but by a kind of pre-rational "folk spirit". Civilization, in contrast, though more rational and more successful in material progress, is unnatural and leads to "vices of social life" such as guile, hypocrisy, envy and avarice. During World War II, Leo Strauss, having fled Germany, argued in New York that this view of civilization was behind Nazism and German militarism and nihilism.
Characteristics.
Social scientists such as V. Gordon Childe have named a number of traits that distinguish a civilization from other kinds of society. Civilizations have been distinguished by their means of subsistence, types of livelihood, settlement patterns, forms of government, social stratification, economic systems, literacy and other cultural traits. Andrew Nikiforuk argues that "civilizations relied on shackled human muscle. It took the energy of slaves to plant crops, clothe emperors, and build cities" and considers slavery to be a common feature of pre-modern civilizations.
All civilizations have depended on agriculture for subsistence, with the possible exception of some early civilizations in Peru which may have depended upon maritime resources. Most developed and permanent civilizations depended on cereal agriculture. The traditional "surplus model" postulates that cereal farming results in accumulated storage and a surplus of food, particularly when people use intensive agricultural techniques such as artificial fertilization, irrigation and crop rotation. It is possible but more difficult to accumulate horticultural production, and so civilizations based on horticultural gardening have been very rare. Grain surpluses have been especially important because grain can be stored for a long time.
Research published in the "Journal of Political Economy" contradicts the surplus model. It postulates that horticultural gardening was actually more productive than cereal farming, but that only cereal farming produced civilization, because of the appropriability of the yearly harvest: rural populations that could only grow cereals could be taxed, allowing for a taxing elite and urban development. Taxation also had a negative effect on the rural population, increasing relative agricultural output per farmer; this farming efficiency created the food surplus and sustained it by decreasing rural population growth in favour of urban growth. On this account, the suitability of land for highly productive roots and tubers was in fact a curse of plenty, which prevented the emergence of states and impeded economic development.
A surplus of food permits some people to do things besides producing food for a living: early civilizations included soldiers, artisans, priests and priestesses, and other people with specialized careers. A surplus of food results in a division of labour and a more diverse range of human activity, a defining trait of civilizations. However, in some places hunter-gatherers have had access to food surpluses, such as among some of the indigenous peoples of the Pacific Northwest and perhaps during the Mesolithic Natufian culture. It is possible that food surpluses and relatively large-scale social organization and division of labour predate plant and animal domestication.
Civilizations have distinctly different settlement patterns from other societies. The word "civilization" is sometimes defined as "living in cities". Non-farmers tend to gather in cities to work and to trade.
Compared with other societies, civilizations have a more complex political structure, namely the state. State societies are more stratified than other societies; there is a greater difference among the social classes. The ruling class, normally concentrated in the cities, has control over much of the surplus and exercises its will through the actions of a government or bureaucracy. Morton Fried, a conflict theorist and Elman Service, an integration theorist, have classified human cultures based on political systems and social inequality. This system of classification contains four categories.
Economically, civilizations display more complex patterns of ownership and exchange than less organized societies. Living in one place allows people to accumulate more personal possessions than nomadic people. Some people also acquire landed property, or private ownership of the land. Because a percentage of people in civilizations do not grow their own food, they must trade their goods and services for food in a market system, or receive food through the levy of tribute, redistributive taxation, tariffs or tithes from the food-producing segment of the population. Early human cultures functioned through a gift economy supplemented by limited barter systems. By the early Iron Age, civilizations had developed money as a medium of exchange for increasingly complex transactions. In a village, the potter makes a pot for the brewer and the brewer compensates the potter by giving him a certain amount of beer. In a city, the potter may need a new roof, the roofer may need new shoes, the cobbler may need new horseshoes, the blacksmith may need a new coat and the tanner may need a new pot. These people may not be personally acquainted with one another and their needs may not occur all at the same time. A monetary system is a way of organizing these obligations to ensure that they are fulfilled. From the days of the earliest monetized civilizations, monopolistic controls of monetary systems have benefited the social and political elites.
The transition from simpler to more complex economies does not necessarily mean an improvement in the living standards of the populace. For example, although the Middle Ages is often portrayed as an era of decline from the Roman Empire, studies have shown that the average stature of males in the Middle Ages (c. 500 to 1500 CE) was greater than it was for males during the preceding Roman Empire and the succeeding Early Modern Period (c. 1500 to 1800 CE). Also, the Plains Indians of North America in the 19th century were taller than their "civilized" American and European counterparts. The average stature of a population is a good measurement of the adequacy of its access to necessities, especially food, and its freedom from disease.
Writing, developed first by people in Sumer, is considered a hallmark of civilization and "appears to accompany the rise of complex administrative bureaucracies or the conquest state". Traders and bureaucrats relied on writing to keep accurate records. Like money, writing was necessitated by the size of a city's population and the complexity of its commerce among people who are not all personally acquainted with one another. However, writing is not always necessary for civilization, as shown by the Inca civilization of the Andes, which did not use writing at all except for a complex recording system consisting of knotted strings of different lengths and colours, the "quipus", and still functioned as a civilized society.
Aided by their division of labour and central government planning, civilizations have developed many other diverse cultural traits. These include organized religion, development in the arts, and countless new advances in science and technology.
Assessments of what level of civilization a polity has reached are based on comparisons of the relative importance of agricultural as opposed to trading or manufacturing capacities, the territorial extensions of its power, the complexity of its division of labour, and the carrying capacity of its urban centres. Secondary elements include a developed transportation system, writing, standardized measurement, currency, contractual and tort-based legal systems, art, architecture, mathematics, scientific understanding, metallurgy, political structures, and organized religion.
As a contrast with other societies.
The idea of civilization implies a progression or development from a previous "uncivilized" state. Traditionally, cultures that defined themselves as "civilized" often did so in contrast to other societies or human groupings viewed as less civilized, calling the latter barbarians, savages, and primitives. Indeed, the modern Western idea of civilization developed as a contrast to the indigenous cultures European settlers encountered during the European colonization of the Americas and Australia. The term "primitive," though once used in anthropology, has now been largely condemned by anthropologists because of its derogatory connotations and because it implies that the cultures it refers to are relics of a past time that do not change or progress.
Because of this, societies regarding themselves as "civilized" have sometimes sought to dominate and assimilate "uncivilized" cultures into a "civilized" way of living. In the 19th century, the idea of European culture as "civilized" and superior to "uncivilized" non-European cultures was fully developed, and civilization became a core part of European identity. The idea of civilization can also be used as a justification for dominating another culture and dispossessing a people of their land. For example, in Australia, British settlers justified the displacement of Indigenous Australians by observing that the land appeared uncultivated and wild, which to them reflected that the inhabitants were not civilized enough to "improve" it. The behaviours and modes of subsistence that characterize civilization have been spread by colonization, invasion, religious conversion, the extension of bureaucratic control and trade, and by the introduction of new technologies to cultures that did not previously have them. Though aspects of culture associated with civilization can be freely adopted through contact between cultures, since early modern times Eurocentric ideals of "civilization" have been widely imposed upon cultures through coercion and dominance. These ideals complemented a philosophy that assumed there were innate differences between "civilized" and "uncivilized" peoples.
Cultural identity.
"Civilization" can also refer to the culture of a complex society, not just the society itself. Every society, civilization or not, has a specific set of ideas and customs, and a certain set of manufactures and arts that make it unique. Civilizations tend to develop intricate cultures, including a state-based decision-making apparatus, a literature, professional art, architecture, organized religion and complex customs of education, coercion and control associated with maintaining the elite.
The intricate culture associated with civilization has a tendency to spread to and influence other cultures, sometimes assimilating them into the civilization; a classic example is Chinese civilization and its influence on nearby civilizations such as Korea, Japan and Vietnam. Many civilizations are actually large cultural spheres containing many nations and regions. The civilization in which someone lives is that person's broadest cultural identity.
The protection of this cultural identity is becoming increasingly important nationally and internationally. Under international law, the United Nations and UNESCO try to establish and enforce relevant rules, with the aim of preserving the cultural heritage of humanity and also cultural identity, especially in the case of war and armed conflict. According to Karl von Habsburg, President of Blue Shield International, the destruction of cultural assets is also part of psychological warfare: the target of the attack is often the opponent's cultural identity, which is why symbolic cultural assets become a main target. Such attacks are also intended to destroy the particularly sensitive cultural memory (museums, archives, monuments, etc.), the accumulated cultural diversity, and the economic basis (such as tourism) of a state, region or community.
Many historians have focused on these broad cultural spheres and have treated civilizations as discrete units. The early twentieth-century philosopher Oswald Spengler used the German word "Kultur", "culture", for what many call a "civilization". Spengler believed a civilization's coherence is based on a single primary cultural symbol. Cultures experience cycles of birth, life, decline, and death, often supplanted by a potent new culture formed around a compelling new cultural symbol. Spengler states that civilization is the beginning of the decline of a culture, as "the most external and artificial states of which a species of developed humanity is capable".
This "unified culture" concept of civilization also influenced the theories of historian Arnold J. Toynbee in the mid-twentieth century. Toynbee explored civilization processes in his multi-volume "A Study of History", which traced the rise and, in most cases, the decline of 21 civilizations and five "arrested civilizations". Civilizations generally declined and fell, according to Toynbee, because of the failure of a "creative minority", through moral or religious decline, to meet some important challenge, rather than mere economic or environmental causes.
Samuel P. Huntington defines civilization as "the highest cultural grouping of people and the broadest level of cultural identity people have short of that which distinguishes humans from other species".
Complex systems.
Another group of theorists, making use of systems theory, looks at a civilization as a complex system, i.e., a framework by which a group of objects can be analysed that work in concert to produce some result. Civilizations can be seen as networks of cities that emerge from pre-urban cultures and are defined by the economic, political, military, diplomatic, social and cultural interactions among them. Any organization is a complex social system and a civilization is a large organization. Systems theory helps guard against superficial and misleading analogies in the study and description of civilizations.
Systems theorists look at many types of relations between cities, including economic relations, cultural exchanges and political/diplomatic/military relations. These spheres often occur on different scales. For example, trade networks were, until the nineteenth century, much larger than either cultural spheres or political spheres. Extensive trade routes, including the Silk Road through Central Asia and Indian Ocean sea routes linking the Roman Empire, Persian Empire, India and China, were well established 2000 years ago, when these civilizations scarcely shared any political, diplomatic, military, or cultural relations. The first evidence of such long-distance trade is in the ancient world. During the Uruk period, Guillermo Algaze has argued, trade relations connected Egypt, Mesopotamia, Iran and Afghanistan. Resin found later in the Royal Cemetery at Ur is suggested to have been traded northwards from Mozambique.
Many theorists argue that the entire world has already become integrated into a single "world system", a process known as globalization. Different civilizations and societies all over the globe are economically, politically, and even culturally interdependent in many ways. There is debate over when this integration began, and what sort of integration – cultural, technological, economic, political, or military-diplomatic – is the key indicator in determining the extent of a civilization. David Wilkinson has proposed that economic and military-diplomatic integration of the Mesopotamian and Egyptian civilizations resulted in the creation of what he calls the "Central Civilization" around 1500 BCE. Central Civilization later expanded to include the entire Middle East and Europe, and then expanded to a global scale with European colonization, integrating the Americas, Australia, China and Japan by the nineteenth century. According to Wilkinson, civilizations can be culturally heterogeneous, like the Central Civilization, or homogeneous, like the Japanese civilization. What Huntington calls the "clash of civilizations" might be characterized by Wilkinson as a clash of cultural spheres within a single global civilization. Others point to the Crusading movement as the first step in globalization. The more conventional viewpoint is that networks of societies have expanded and shrunk since ancient times, and that the current globalized economy and culture is a product of recent European colonialism.
History.
The notion of human history as a succession of "civilizations" is an entirely modern one. In the European Age of Discovery, emerging Modernity was put into stark contrast with the Neolithic and Mesolithic stages of the cultures of many of the peoples Europeans encountered. Nonetheless, developments of the Neolithic stage, such as agriculture and sedentary settlement, were critical to the development of modern conceptions of civilization.
Urban Revolution.
The Natufian culture in the Levantine corridor provides the earliest case of a Neolithic Revolution, with the planting of cereal crops attested from 11,000 BCE. The earliest Neolithic technology and lifestyle were established first in Western Asia (for example at Göbekli Tepe, from about 9,130 BCE), later in the Yellow River and Yangtze basins in China (for example the Peiligang and Pengtoushan cultures), and from these cores spread across Eurasia. Mesopotamia is the site of the earliest civilizations, developing from around 7,400 years ago. This area has been evaluated by Beverley Milton-Edwards as having "inspired some of the most important developments in human history including the invention of the wheel, the building of the earliest cities and the development of written cursive script". Similar pre-civilized "neolithic revolutions" also began independently from 7,000 BCE in northwestern South America (the Caral-Supe civilization) and in Mesoamerica. The Black Sea area served as a cradle of European civilization: the site of Solnitsata, a prehistoric fortified (walled) stone settlement or proto-city (5500–4200 BCE), is believed by some archaeologists to be the oldest known town in present-day Europe.
The 8.2 Kiloyear Arid Event and the 5.9 Kiloyear Inter-pluvial saw the drying out of semiarid regions and a major spread of deserts. This climate change shifted the cost-benefit ratio of endemic violence between communities, which saw the abandonment of unwalled village communities and the appearance of walled cities, seen by some as a characteristic of early civilizations.
This "urban revolution"—a term introduced by Childe in the 1930s—from the 4th millennium BCE, marked the beginning of the accumulation of transferable economic surpluses, which helped economies and cities develop. Urban revolutions were associated with the state monopoly of violence, the appearance of a warrior, or soldier, class and endemic warfare (a state of continual or frequent warfare), the rapid development of hierarchies, and the use of human sacrifice.
The civilized urban revolution in turn was dependent upon the development of sedentism, the domestication of grains, plants and animals, the permanence of settlements and development of lifestyles that facilitated economies of scale and accumulation of surplus production by particular social sectors. The transition from "complex cultures" to "civilizations", while still disputed, seems to be associated with the development of state structures, in which power was further monopolized by an elite ruling class who practiced human sacrifice.
Towards the end of the Neolithic period, various elitist Chalcolithic civilizations began to rise in various "cradles" from around 3600 BCE beginning with Mesopotamia, expanding into large-scale kingdoms and empires in the course of the Bronze Age (Akkadian Empire, Indus Valley Civilization, Old Kingdom of Egypt, Neo-Sumerian Empire, Middle Assyrian Empire, Babylonian Empire, Hittite Empire, and to some degree the territorial expansions of the Elamites, Hurrians, Amorites and Ebla).
Outside the Old World, development took place independently in the pre-Columbian Americas. Urbanization in the Caral-Supe civilization in what is now coastal Peru began about 3500 BCE. In North America, the Olmec civilization emerged about 1200 BCE; the oldest known Mayan city, located in what is now Guatemala, dates to about 750 BCE; and Teotihuacan (near modern Mexico City) was one of the largest cities in the world in 350 CE, with a population of about 125,000.
Axial Age.
The Bronze Age collapse was followed by the Iron Age around 1200 BCE, during which a number of new civilizations emerged, culminating in a period from the 8th to the 3rd century BCE which Karl Jaspers termed the Axial Age, presented as a critical transitional phase leading to classical civilization.
Modernity.
A major technological and cultural transition to modernity began approximately 1500 CE in Western Europe, and from this beginning new approaches to science and law spread rapidly around the world, incorporating earlier cultures into the technological and industrial society of the present.
Fall of civilizations.
Civilizations are traditionally understood as ending in one of two ways; either through incorporation into another expanding civilization (e.g. as Ancient Egypt was incorporated into Hellenistic Greek, and subsequently Roman civilizations), or by collapsing and reverting to a simpler form of living, as happens in so-called Dark Ages.
There have been many explanations put forward for the collapse of civilization. Some focus on historical examples, and others on general theory.
Future.
According to political scientist Samuel P. Huntington, the 21st century will be characterized by a clash of civilizations, which he believes will replace the conflicts between nation-states and ideologies that were prominent in the 19th and 20th centuries. This viewpoint has been strongly challenged by others such as Edward Said, Muhammed Asadi and Amartya Sen. Ronald Inglehart and Pippa Norris have argued that the "true clash of civilizations" between the Muslim world and the West is caused by the Muslim rejection of the West's more liberal sexual values, rather than a difference in political ideology, although they note that this lack of tolerance is likely to lead to an eventual rejection of (true) democracy. In "Identity and Violence" Sen questions whether people should be divided along the lines of a supposed "civilization", defined by religion and culture only. He argues that this ignores the many other identities that make up people and leads to a focus on differences.
Cultural historian Morris Berman argues in "Dark Ages America: The End of Empire" that in the corporate consumerist United States, the very factors that once propelled it to greatness – extreme individualism, territorial and economic expansion, and the pursuit of material wealth – have pushed the United States across a critical threshold where collapse is inevitable. Pointing to political over-reach, environmental exhaustion, and the polarization of wealth between rich and poor, he concludes that the current system is fast arriving at a point where continuing it, saddled with huge deficits and a hollowed-out economy, is physically, socially, economically and politically impossible. Although developed in much more depth, Berman's thesis is similar in some ways to that of urban planner Jane Jacobs, who argues that the five pillars of United States culture are in serious decay: community and family; higher education; the effective practice of science; taxation and government; and the self-regulation of the learned professions. The corrosion of these pillars, Jacobs argues, is linked to societal ills such as environmental crisis, racism and the growing gulf between rich and poor.
Cultural critic and author Derrick Jensen argues that modern civilization is directed towards the domination of the environment and humanity itself in an intrinsically harmful, unsustainable, and self-destructive fashion. Defending his definition both linguistically and historically, he defines civilization as "a culture... that both leads to and emerges from the growth of cities", with "cities" defined as "people living more or less permanently in one place in densities high enough to require the routine importation of food and other necessities of life". This need for civilizations to import ever more resources, he argues, stems from their over-exploitation and diminution of their own local resources. Therefore, civilizations inherently adopt imperialist and expansionist policies and, to maintain these, highly militarized, hierarchically structured, and coercion-based cultures and lifestyles.
The Kardashev scale classifies civilizations based on their level of technological advancement, specifically measured by the amount of energy a civilization is able to harness. The scale is only hypothetical, but it puts energy consumption in a cosmic perspective. The Kardashev scale makes provisions for civilizations far more technologically advanced than any currently known to exist.
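The scale's quantitative form is often given by Carl Sagan's interpolation, which maps a civilization's harnessed power P, in watts, onto a continuous rating:

```latex
K = \frac{\log_{10} P - 6}{10}
```

On this formula, a Type I civilization commands about 10^16 W; present-day humanity, at roughly 10^13 W, rates near 0.7.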
Non-human civilizations.
The current scientific consensus is that human beings are the only animal species to have emerged on Earth with the cognitive ability to create civilizations. A recent thought experiment, the Silurian hypothesis, nonetheless considers whether it would "be possible to detect an industrial civilization in the geological record" given the paucity of geological information about eras before the Quaternary.
Astronomers speculate about the existence of communicating intelligent civilizations within and beyond the Milky Way galaxy, usually using variants of the Drake equation, and conduct searches for such intelligences, for example for technological traces called "technosignatures". The proposed proto-scientific field of "xenoarchaeology" is concerned with the study of the artifact remains of non-human civilizations, in order to reconstruct and interpret the past lives of alien societies should any be discovered and scientifically confirmed.
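For reference, the Drake equation estimates the number N of detectable communicating civilizations in the galaxy as the product of a rate and successive fractions:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l, f_i and f_c the fractions of those on which life, intelligence and detectable communication respectively arise, and L the length of time over which a civilization releases detectable signals.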
Civilization (video game)
Sid Meier's Civilization is a 1991 turn-based strategy 4X video game developed and published by MicroProse. The game was originally developed for MS-DOS running on a PC, and it has undergone numerous revisions for various platforms. The player is tasked with leading an entire human civilization over the course of several millennia by controlling various areas such as urban development, exploration, government, trade, research, and military. The player can control individual units and advance the exploration, conquest and settlement of the game's world. The player can also make such decisions as setting forms of government, tax rates and research priorities. The player's civilization is in competition with other computer-controlled civilizations, with which the player can enter diplomatic relationships that can either end in alliances or lead to war.
"Civilization" was designed by Sid Meier and Bruce Shelley following the successes of "Silent Service", "Sid Meier's Pirates!" and "Railroad Tycoon". "Civilization" has sold 1.5 million copies since its release and is considered one of the most influential computer games in history due to its establishment of the 4X genre. In addition to its commercial and critical success, the game has been deemed pedagogically valuable due to its presentation of historical relationships, and one of the greatest video games ever made by several publications. A multiplayer remake, Sid Meier's CivNet, was released for the PC in 1995. "Civilization" was followed by several sequels starting with "Civilization II", with similar or modified scenarios.
Gameplay.
"Civilization" is a turn-based single-player strategy game. The player takes on the role of the ruler of a civilization, starting with one (or occasionally two) settler units, and attempts to build an empire in competition with two to seven other civilizations. The following civilizations appear in the game: Americans, Aztecs, Babylonians, Chinese, Egyptians, English, French, Germans, Greeks, Indians, Mongols, Romans, Russians and Zulus.
The game requires a fair amount of micromanagement (although less than other simulation games). Along with the larger tasks of exploration, warfare and diplomacy, the player has to make decisions about where to build new cities, which improvements or units to build in each city, which advances in knowledge should be sought (and at what rate), and how to transform the land surrounding the cities for maximum benefit. From time to time the player's towns may be harassed by barbarians, units with no specific nationality and no named leader. These threats only come from huts, unclaimed land or sea, so that over time and turns of exploration, there are fewer and fewer places from which barbarians will emanate.
Before the game begins, the player chooses which historical or current civilization to play. In contrast to later games in the "Civilization" series, this is largely a cosmetic choice, affecting titles, city names, musical heralds, and color. The choice does affect their starting position on the "Play on Earth" map, and thus different resources in one's initial cities, but has no effect on starting position when starting a random world game or a customized world game. The player's choice of civilization also prevents the computer from being able to play as that civilization or the other civilization of the same color, and since computer-controlled opponents display certain traits of their civilizations this affects gameplay as well. The Aztecs are both fiercely expansionist and generally extremely wealthy, for example. Other civilizations include the Americans, the Mongols, and Romans. Each civilization is led by a famous historical figure, such as Mahatma Gandhi for India.
The scope of "Civilization" is larger than most other games. The game begins in 4000 BC, before the Bronze Age, and can last through to AD 2100 (on the easiest setting) with Space Age and "future technologies". At the start of the game there are no cities anywhere in the world: the player controls one or two settler units, which can be used to found new cities in appropriate sites (and those cities may build other settler units, which can go out and found new cities, thus expanding the empire). Settlers can also alter terrain, build improvements such as mines and irrigation, build roads to connect cities, and later in the game they can construct railroads which offer unlimited movement.
As time advances, new technologies are developed; these technologies are the primary way in which the game changes and grows. At the start, players choose from advances such as pottery, the wheel, and the alphabet to, near the end of the game, nuclear fission and spaceflight. Players can gain a large advantage if their civilization is the first to learn a particular technology (the secrets of flight, for example) and put it to use in a military or other context. Most advances give access to new units, city improvements or derivative technologies: for example, the chariot unit becomes available after the wheel is developed, and the granary building becomes available to build after pottery is developed. The whole system of advancements from beginning to end is called the technology tree, or simply the Tech tree; this concept has been adopted in many other strategy games. Since only one tech may be "researched" at any given time, the order in which technologies are chosen makes a considerable difference in the outcome of the game and generally reflects the player's preferred style of gameplay.
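Mechanically, the technology tree described above is a dependency graph: a technology may be chosen for research once all of its prerequisites are known. The sketch below illustrates only that general rule; it is not MicroProse's data or code, and the specific links (e.g. Writing requiring Alphabet) are used purely as examples.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_TECHS   4
#define MAX_PREREQS 2

/* Example dependency graph (names from the text; links illustrative). */
static const char *tech_names[NUM_TECHS] = {
    "Alphabet", "Pottery", "The Wheel", "Writing"
};
/* prereqs[t] lists tech indices that must be known first; -1 = unused slot. */
static const int prereqs[NUM_TECHS][MAX_PREREQS] = {
    {-1, -1},   /* Alphabet: a starting choice  */
    {-1, -1},   /* Pottery: a starting choice   */
    {-1, -1},   /* The Wheel: a starting choice */
    { 0, -1},   /* Writing: requires Alphabet   */
};

/* A tech may be chosen for research once every prerequisite is known. */
bool researchable(int tech, const bool known[NUM_TECHS]) {
    if (known[tech]) return false;               /* already discovered  */
    for (int i = 0; i < MAX_PREREQS; i++) {
        int p = prereqs[tech][i];
        if (p >= 0 && !known[p]) return false;   /* missing prerequisite */
    }
    return true;
}

int main(void) {
    bool known[NUM_TECHS] = { false };
    known[0] = true;                             /* suppose Alphabet is done */
    for (int t = 0; t < NUM_TECHS; t++)
        if (researchable(t, known))
            printf("can research: %s\n", tech_names[t]);
    return 0;
}
```

Because only one such candidate may be researched at a time, the order in which a player walks this graph is what gives the mechanic its non-linearity.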
Players can also build "Wonders of the World" in each of the epochs of the game, subject only to obtaining the prerequisite knowledge. These wonders are important achievements of society, science, culture and defense, ranging from the Pyramids and the Great Wall in the Ancient age, to Copernicus' Observatory and Magellan's Expedition in the middle period, up to the Apollo program, the United Nations, and the Manhattan Project in the modern era. Each wonder can only be built once in the world, and requires a lot of resources to build, far more than most other city buildings or units. Wonders provide unique benefits to the controlling civilization. For example, Magellan's Expedition increases the movement rate of naval units. Wonders typically affect either the city in which they are built (for example, the Colossus), every city on the continent (for example, J.S. Bach's Cathedral), or the civilization as a whole (for example, Darwin's Voyage). Some wonders are made obsolete by new technologies.
The game can be won by conquering all other civilizations or by winning the space race by reaching the star system of Alpha Centauri.
Development.
Prior "Civilization"-named games.
British designer Francis Tresham released his "Civilization" board game in 1980 under his company Hartland Trefoil. Avalon Hill had obtained the rights to publish it in the United States in 1981.
There were at least two attempts to make a computerized version of Tresham's game prior to 1990. Danielle Bunten Berry planned to start work on the game after completing "M.U.L.E." in 1983, and again in 1985, after completing "The Seven Cities of Gold" at Electronic Arts. In 1983 Bunten and producer Joe Ybarra opted to first do "Seven Cities of Gold". The success of "Seven Cities" in 1985 in turn led to a sequel, "Heart of Africa". Bunten never returned to the idea of "Civilization". Don Daglow, designer of "Utopia", the first simulation game, began work programming a version of "Civilization" in 1987. He dropped the project, however, when he was offered an executive position at Broderbund, and never returned to the game.
Development at MicroProse.
Sid Meier and Bill Stealey co-founded MicroProse in 1982 to develop flight simulators and other military strategy video games based on Stealey's past experiences as a United States Air Force pilot. Around 1989, Meier wanted to expand his repertoire beyond these types of games; having just finished "F-19 Stealth Fighter" (1988, 1990), he said, "Everything I thought was cool about a flight simulator had gone into that game." He took to heart the success of the new god game genre, in particular "SimCity" (1989) and "Populous" (1989). Specifically with "SimCity", Meier recognized that video games could still be entertaining based on building something up. By then, Meier was not an official employee of MicroProse but worked under contract, where the company paid him upfront for game development, a large payment on delivery of the game, and additional royalties on each copy of his games sold.
MicroProse had hired a number of Avalon Hill game designers, including Bruce Shelley. Among other works, Shelley had been responsible for adapting the railroad-based "1829" board game developed by Tresham into "1830". Shelley had joined MicroProse on finding that the board game market was weakening in contrast to the video game market, and initially worked on "F-19 Stealth Fighter". Meier recognized Shelley's abilities and background in game design and took him on as personal assistant designer to brainstorm new game ideas. The two initially worked on ideas for "Covert Action", but put these aside when they came up with the concepts for "Railroad Tycoon" (1990), based loosely on the "1829"/"1830" board games. "Railroad Tycoon" was generally well received at its release, but the title did not fit the nature of flight simulators and military strategy in MicroProse's previous catalog. Meier and Shelley had started a sequel to "Railroad Tycoon" shortly after its release, but Stealey canceled the project.
One positive aspect both had taken from "Railroad Tycoon" was the idea of multiple smaller systems working together at the same time and the player having to manage them. Both Meier and Shelley recognized that the complex interactions between these systems led players to "make a lot of interesting decisions", and that ruling a whole civilization would readily work well with these underlying systems. Some time later, both discussed their love of the original "Empire" computer games, and Meier challenged Shelley to give him ten things he would change about "Empire"; Shelley provided him with twelve. Around May 1990, Meier presented Shelley with a 5-1/4" floppy disk which contained the first prototype of "Civilization" based on their past discussions and Shelley's list.
Meier described his development process as sculpting with clay. His prototype took elements from "Empire", "Railroad Tycoon", "SimCity" and the "Civilization" board game. This initial version of this game was a real-time simulation, with the player defining zones for their population to grow similar to zoning in "SimCity". Meier and Shelley went back and forth with this, with Shelley providing suggestions based on his playthrough and acting as the game's producer, and Meier coding and reworking the game to address these points, and otherwise without involvement of other MicroProse staff. During this period, Stealey and the other managers became concerned that this game did not fit MicroProse's general catalog as strategy computer games had not yet proven successful. A few months into the development, Stealey requested them to put the project on hold and complete "Covert Action", after which they could go back to their new game. Meier and Shelley completed "Covert Action" which was published in 1990.
Once "Covert Action" was released, Meier and Shelley returned to the prototype. The time away from the project allowed them to recognize that the real-time aspect was not working well, and reworked the game to become turn-based and dropped the zoning aspect, a change that Meier described as "like tossing the clay in the trash and getting a new lump". They incorporated elements of city management and military aspect from "Empire", including creating individual military units as well as settler units that replaced the functionality of the zoning approach. Meier felt adding military and combat to the game was necessary: "The game really isn't about being civilized. The competition is what makes the game fun and the players play their best. At times, you have to make the player uncomfortable for the good of the player." Meier also opted to include a technology tree that would help to open the game to many more choices to the player as it continued, creating a non-linear experience. Meier felt players would be able to use the technology tree to adopt a style of play and from which they could use technologies to barter with the other opponents. While the game relies on established recorded history, Meier admitted he did not spend much time in research, usually only to assure the proper chronology or spellings; Shelley noted that they wanted to design for fun, not accuracy, and that "Everything we needed was pretty much available in the children’s section of the library."
"Computer Gaming World" reported in 1994 that "Sid Meier has stated on numerous occasions that he emphasizes the 'fun parts' of a simulation and throws out the rest". Meier described the process as "Add another bit [of clay]—no, that went too far. Scrape it off". He eliminated the potential for any civilization to fall on its own, believing this would be punishing to the player. "Though historically accurate", Meier said, "The moment the Krakatoa volcano blew up, or the bubonic plague came marching through, all anybody wanted to do was reload from a
saved game". Meier omitted multiplayer alliances because the computer used them too effectively, causing players to think that it was cheating. He said that by contrast, minefields and minesweepers caused the computer to do "stupid things ... If you've got a feature that makes the AI look stupid, take it out. It's more important not to have stupid AI than to have good AI". Meier also omitted jets and helicopters because he thought players would not find obtaining new technologies in the endgame useful, and online multiplayer support because of the small number of online players ("if you had friends, you wouldn't need to play computer games"); he also did not believe that online play worked well with turn-based play. The game was developed for the IBM PC platform, which at the time had support for both 16-color EGA to 256-color VGA; Meier opted to support both 16-color and 256-color graphics to allow the game to run on both EGA/Tandy and VGA/MCGA systems.
"I’ve never been able to decide if it was a mistake to keep Civ isolated as long as I did", Meier wrote; while "as many eyes as possible" are beneficial during development, Meier and Shelley worked very quickly together, combining the roles of playtester, game designer, and programmer. Meier and Shelley neared the end of their development and started presenting the game to the rest of MicroProse for feedback towards publication. This process was slowed by the current vice president of development, who had taken over Meier's former position at the company. This vice president did not receive any financial bonuses for successful publication of Meier's games due to Meier's contract terms, forgoing any incentive to provide the needed resources to finish the game. The management had also expressed issue with the lack of a firm completion date, as according to Shelley, Meier would consider a game completed only when he felt he had completed it. Eventually the two got the required help for publication, with Shelley overseeing these processes and Meier making the necessary coding changes.
"One of my big rules has always been, 'double it, or cut it in half, Meier wrote. He cut the map's size in half less than a month before "Civilization" release after playtesting revealed that the previous size was too large and made for boring and repetitive gameplay. Other automated features, like city management, were modified to require more player involvement. They also eliminated a secondary branch of the technology tree with minor skills like beer brewing, and spent time reworking the existing technologies and units to make sure they felt appropriate and did not break the game. Most of the game was originally developed with art crafted by Meier, and MicroProse's art department helped to create most of the final assets, though some of Meier's original art was used. Shelley wrote out the "Civilopedia" entries for all the elements of the game and the game's large manual.
The name "Civilization" came late in the development process. MicroProse recognized at this point the 1980 "Civilization" board game may conflict with their video game, as it shared a similar theme including the technology tree. Meier had noted the board game's influence but considered it not as great as "Empire" or "SimCity", while others have noted significant differences that made the video game far different from the board game such as the non-linearity introduced by Meier's technology tree. To avoid any potential legal issues, MicroProse negotiated a license to use the "Civilization" name from Avalon Hill. The addition of Meier's name to the title was from a current practice established by Stealey to attach games like "Civilization" that diverged from MicroProse's past catalog to Meier's name, so that players that played Meier's combat simulators and recognized Meier's name would give these new games a try. This approach worked, according to Meier, and he would continue this naming scheme for other titles in the future as a type of branding.
By the time the game was completed and ready for release, Meier estimated that it had cost $170,000 in development. "Civilization" was released in September 1991. Because of the animosity that MicroProse's management had towards Meier's games, there was very little promotion of the title, though interest in the game through word-of-mouth helped to boost sales. Following the release on the IBM PC, the game was ported to other platforms; Meier and Shelley provided this code to contractors hired by MicroProse to complete the ports.
"CivNet".
"Civilization" was released with only single-player support, with the player working against multiple computer opponents. In 1991, Internet or online gaming was still in its infancy, so this option was not considered in "Civilization" release. Over the next few years, as home Internet accessibility took off, MicroProse looked to develop an online version of "Civilization". This led to the 1995 release of "Sid Meier's CivNet". "CivNet" allowed for up to seven players to play the game, with computer opponents available to obtain up to six active civilizations. Games could be played either on a turn-based mode, or in a simultaneous mode where each player took their turn at the same time and only progressing to the next turn once all players have confirmed being finished that turn. The game, in addition to better support for Windows 3.1 and Windows 95, supported connectivity through LAN, primitive Internet play, modem, and direct serial link, and included a local hotseat mode. "CivNet" also included a map editor and a "king builder" to allow a player to customize the names and looks of their civilization as seen by other players.
According to Brian Reynolds, who led the development of "Civilization II", MicroProse "sincerely believed that "CivNet" was going to be a much more important product" than the next single-player "Civilization" game that he and Jeff Briggs had started working on. Reynolds said that because their project was seen as a side effort with little risk, they were able to innovate new ideas into "Civilization II". As a net result, "CivNet" was generally overshadowed by "Civilization II" which was released in the following year.
Post-release.
"Civilization" critical success created a "golden period of MicroProse" where there was more potential for similar strategy games to succeed, according to Meier. This put stress on the company's direction and culture. Stealey wanted to continue to pursue the military-themed titles, while Meier wanted to continue his success with simulation games. Shelley left MicroProse in 1992 and joined Ensemble Studios, where he used his experience with "Civilization" to design the "Age of Empires" games. Stealey had pushed MicroProse to develop console and arcade-based versions of their games, but this put the company into debt, and Stealey eventually sold the company to Spectrum HoloByte in 1993; Spectrum HoloByte kept MicroProse as a separate company on acquisition.
Meier would continue and develop "Civilization II" along with Brian Reynolds, who served in a similar role to Shelley as design assistant, as well as help from Jeff Briggs and Douglas Kaufman. This game was released in early 1996, and is considered the first sequel of any Sid Meier game. Stealey eventually sold his shares in MicroProse and left the company, and Spectrum HoloByte opted to consolidate the two companies under the name MicroProse in 1996, eliminating numerous positions at MicroProse in the process. As a result, Meier, Briggs, and Reynolds all opted to leave the company and founded Firaxis, which by 2005 became a subsidiary of Take-Two. After a number of acquisitions and legal actions, the "Civilization" brand (both as a board game and video game) is now owned by Take-Two, and Firaxis, under Meier's oversight, continues to develop games in the "Civilization" series.
Reception.
"Civilization" has been called one of the most important strategy games of all time, and has a loyal following of fans. This high level of interest has led to the creation of a number of free and open source versions and inspired similar games by other commercial developers.
"Computer Gaming World" stated that "a new Olympian in the genre of god games has truly emerged", comparing "Civilization" importance to computer games to that of the wheel. The game was reviewed in 1992 in "Dragon" #183 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 5 out of 5 stars and commented that: ""Civilization" is one of the highest dollar-to-play-ratio entertainments we've enjoyed. The scope is enormous, the strategies border on being limitless, the excitement is genuinely high, and the experience is worth every dime of the game's purchase price."
Jim Trunzo reviewed "Civilization" in "White Wolf" #31 (May/June, 1992) and stated that ""Civilization" should have great appeal to the plotters and thinkers, those who like challenges on a global scale. 'Might makes right' addicts should stick to games less cerebral."
Jeff Koke reviewed "Civilization" in "Pyramid" #2 (July/Aug., 1993), and stated that "Ultimately, there are games that are a lot flashier than "Civilization", with cool graphics and animation, but there aren't many - or any - in my book that have the ability to absorb the player so totally and to provide an interesting, unique outcome each and every time it's played."
"Civilization" won the Origins Award in the category Best Military or Strategy Computer Game of 1991. A 1992 "Computer Gaming World" survey of wargames with modern settings gave the game five stars out of five, describing it as "more addictive than crack ... so rich and textured that the documentation is incomplete". In 1992 the magazine named it the Overall Game of the Year, in 1993 added the game to its Hall of Fame, and in 1996 chose "Civilization" as the best game of all time:
A critic for "Next Generation" judged the Super NES version to be a disappointing port, with a cumbersome menu system (particularly that the "City" and "Production" windows are on separate screens), an unintuitive button configuration, and ugly scaled down graphics. However, he gave it a positive recommendation due to the strong gameplay and strategy of the original game: "if you've never taken a crack at this game before, be prepared to lose hours, even days, trying to conquer those pesky Babylonians." Sir Garnabus of "GamePro", in contrast, was pleased with the Super NES version's interface, and said the graphics and audio are above that of a typical strategy game. He also said the game stood out among the Super NES's generally action-oriented library.
In 1996, "Computer Gaming World" listed it as the best game of all time. In 2000, GameSpot rated "Civilization" as the tenth most influential video game of all time. It was also ranked fourth on "IGN"'s 2000 list of the top PC games of all time. In 2004, readers of "Retro Gamer" voted it the 29th top retro game. In 2007, it was named one of the 16 most influential games in history at Telespiele, a German technology and games trade show. In Poland, it was included in the retrospective lists of the best Amiga games by Wirtualna Polska (ranked ninth) and "CHIP" (ranked fifth). In 2012, "Time" named it one of the 100 greatest video games of all time. In 1994, "PC Gamer US" named "Civilization" the second-best computer game ever; the editors wrote, "The depth of strategies possible is impressive, and the look and feel of the game will keep you playing and exploring for months. Truly a remarkable title." That same year, "PC Gamer UK" named its Windows release the sixth-best computer game of all time, calling it Sid Meier's "crowning glory".
On March 12, 2007, "The New York Times" reported on a list of the ten most important video games of all time, the so-called game canon, including "Civilization".
By the release of "Civilization II" in 1996, "Civilization" had sold over 850,000 copies. By 2001, sales had reached 1 million copies. Shelley stated in a 2016 interview that "Civilization" had sold 1.5 million copies.
In 2022, The Strong National Museum of Play inducted "Sid Meier's Civilization" into its World Video Game Hall of Fame.
Legacy.
There have been several sequels to "Civilization", including "Civilization II" (1996), "Civilization III" (2001), "Civilization IV" (2005), "Civilization Revolution" (2008), "Civilization V" (2010), "Civilization VI" (2016), and "Civilization VII" (2025). In 1994, Meier produced a similar game titled "Colonization".
"Civilization" is generally considered the first major game in the genre of "4X", with the four "X"s equating to "explore, expand, exploit, and exterminate", a term developed by Alan Emrich in promoting 1993's "Master of Orion". While other video games with the principles of 4X had been released prior to "Civilization", future 4X games would attribute some of their basic design principles to "Civilization".
A famous supposed bug in the original game – later debunked – is that a computer-controlled Gandhi, normally a highly peaceful leader, could become a nuclear warmonger if provoked. It was theorized that the game started Gandhi's "aggression value" at 1 out of a maximum 255 possible for an 8-bit unsigned integer, making a computer-controlled Gandhi tend to avoid armed conflict. However, once a civilization achieves democracy as its form of government, its leader's aggression value falls by 2. Under normal arithmetic principles, Gandhi's "1" would be reduced to "-1", but because the value is an 8-bit unsigned integer, it supposedly wraps around to "255", causing Gandhi to suddenly become the most aggressive opponent in the game.
Interviewed in 2019, developer Brian Reynolds said with "99.99% certainty" that this story was apocryphal, recalling Gandhi's coded aggression level as being no lower than other peaceful leaders in the game, and doubting that a wraparound would have had the effect described. He noted that all leaders in the game become "pretty ornery" after their acquisition of nuclear weapons, and suggested that this behaviour simply seemed more surprising and memorable when it happened to Gandhi. Meier, in his autobiography, stated "That kind of bug comes from something called unsigned characters, which are not the default in the C programming language, and not something I used for the leader traits. Brian Reynolds wrote Civ II in C++, and he didn't use them, either. We received no complaints about a Gandhi bug when either game came out, nor did we send out any revisions for one. Gandhi's military aggressiveness score remained at 1 throughout the game." He then explains the overflow error story was made up in 2012. It spread from there to a Wikia entry, then eventually to Reddit, and was picked up by news sites like Kotaku and Geek.com. The story may have originated from the fact that 2010's "Civilization V" was deliberately written with Gandhi having an affinity for nuclear weapons, added as a joke by developer Jon Shafer. The misinformation around this bug led to the meme known as "Nuclear Gandhi".
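The wraparound arithmetic the story invokes is, in itself, real behaviour in C: operations on an unsigned 8-bit value are performed modulo 256. A minimal sketch of the claimed effect follows; the variable name and values are illustrative only, and, as Meier states above, the actual games did not store leader traits as unsigned values.
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical stand-in for the aggression score in the myth:
       an unsigned 8-bit value starting at 1. */
    uint8_t aggression = 1;

    /* The supposed democracy modifier subtracts 2. Unsigned
       arithmetic wraps modulo 256, so 1 - 2 yields 255. */
    aggression -= 2;

    printf("aggression = %u\n", (unsigned)aggression); /* prints 255 */
    return 0;
}
</syntaxhighlight>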
Another relic of "Civilization" was the nature of combat where a military unit from earlier civilization periods could remain in play through modern times, gaining combat bonuses due to veteran proficiency, leading to these primitive units easily beating out modern technology against all common sense, with the common example of a veteran phalanx unit able to fend off a battleship. Meier noted that this resulted from not anticipating how players would use units, expecting them to have used their forces more like a war-based board game to protect borders and maintain zones of control rather than creating "stacks of doom". Future civilization games have had many changes in combat systems to prevent such oddities, though these games do allow for such random victories.
The 1999 game "Sid Meier's Alpha Centauri" was also created by Meier and is in the same genre, but with a futuristic/space theme; many of the interface and gameplay innovations in this game eventually made their way into "Civilization III" and "IV". "Alpha Centauri" is not actually a sequel to "Civilization", despite beginning with the same event that ends "Civilization" and "Civilization II": a crewed spacecraft from Earth arrives in the Alpha Centauri star system. Firaxis' 2014 game "", although bearing the name of the main series, is a reimagining of "Alpha Centauri" running on the engine of "Civilization V".
A 1994 "Computer Gaming World" survey of space war games stated that "the lesson of this incredibly popular wargame has not been lost on the software community, and technological research popped up all over the place in 1993", citing "Spaceward Ho!" and "Master of Orion" as examples. That year MicroProse published "Master of Magic", a similar game but embedded in a medieval-fantasy setting where instead of technologies the player (a powerful wizard) develops spells, among other things. In 1999, Activision released "", a sequel of sorts to "Civilization II" but created by a completely different design team. "Call to Power" spawned a sequel in 2000, but by then Activision had sold the rights to the "Civilization" name and could only call it "Call to Power II".
An open source clone of "Civilization" has been developed under the name of "Freeciv", with the slogan "'Cause civilization should be free." This game can be configured to match the rules of either "Civilization" or "Civilization II". Another game that partially clones "Civilization" is a public domain game called "C-evo".
|
6260
|
14419815
|
https://en.wikipedia.org/wiki?curid=6260
|
Claude Debussy
|
Achille Claude Debussy (; 22 August 1862 – 25 March 1918) was a French composer. He is sometimes seen as the first Impressionist composer, although he vigorously rejected the term. He was among the most influential composers of the late 19th and early 20th centuries.
Born to a family of modest means and little cultural involvement, Debussy showed enough musical talent to be admitted at the age of ten to France's leading music college, the Conservatoire de Paris. He originally studied the piano, but found his vocation in innovative composition, despite the disapproval of the Conservatoire's conservative professors. He took many years to develop his mature style, and was nearly 40 when he achieved international fame in 1902 with the only opera he completed, "Pelléas et Mélisande".
Debussy's orchestral works include "Prélude à l'après-midi d'un faune" (1894), "Nocturnes" (1897–1899) and "Images" (1905–1912). His music was to a considerable extent a reaction against Wagner and the German musical tradition. He regarded the classical symphony as obsolete and sought an alternative in his "symphonic sketches", "La mer" (1903–1905). His piano works include sets of 24 Préludes and 12 Études. Throughout his career he wrote "mélodies" based on a wide variety of poetry, including his own. He was greatly influenced by the Symbolist poetic movement of the later 19th century. A small number of works, including the early "La Damoiselle élue" and the late "Le Martyre de saint Sébastien" have important parts for chorus. In his final years, he focused on chamber music, completing three of six planned sonatas for different combinations of instruments.
With early influences including Russian and Far Eastern music and works by Chopin, Debussy developed his own style of harmony and orchestral colouring, derided – and unsuccessfully resisted – by much of the musical establishment of the day. His works have strongly influenced a wide range of composers including Béla Bartók, Igor Stravinsky, George Gershwin, Olivier Messiaen, George Benjamin, and the jazz pianist and composer Bill Evans. Debussy died from cancer at his home in Paris at the age of 55 after a composing career of a little more than 30 years.
Life and career.
Early life.
Debussy was born on 22 August 1862 in Saint-Germain-en-Laye, Seine-et-Oise, on the north-west fringes of Paris.{{refn|Debussy's birthplace is now a museum dedicated to him. In addition to displays depicting his life and work, the building contains a small auditorium in which an annual season of concerts is given.|group= n}} He was the eldest of the five children of Manuel-Achille Debussy and his wife, Victorine, "née" Manoury. Debussy senior ran a china shop and his wife was a seamstress. The shop was unsuccessful, and closed in 1864; the family moved to Paris, first living with Victorine's mother, in Clichy, and, from 1868, in their own apartment in the Rue Saint-Honoré. Manuel worked in a printing factory.
Debussy's talents soon became evident, and in 1872, aged ten, he was admitted to the Conservatoire de Paris, where he remained a student for the next eleven years. He first joined the piano class of Antoine François Marmontel, and studied solfège with Albert Lavignac and, later, composition with Ernest Guiraud, harmony with Émile Durand, and organ with César Franck. The course included music history and theory studies with Louis-Albert Bourgault-Ducoudray, but it is not certain that Debussy, who was apt to skip classes, actually attended these.
At the Conservatoire, Debussy initially made good progress. Marmontel said of him, "A charming child, a truly artistic temperament; much can be expected of him". Another teacher was less impressed: Émile Durand wrote in a report, "Debussy would be an excellent pupil if he were less sketchy and less cavalier." A year later he described Debussy as "desperately careless". In July 1874 Debussy received the award of "deuxième accessit"{{refn|That is, fourth prize, after the "premier accessit", the runner-up ("second prix") and the winner ("premier prix").|group= n}} for his performance as soloist in the first movement of Chopin's Second Piano Concerto at the Conservatoire's annual competition. He was a fine pianist and an outstanding sight reader, who could have had a professional career had he wished, but he was only intermittently diligent in his studies. He advanced to "premier accessit" in 1875 and second prize in 1877, but failed at the competitions in 1878 and 1879. These failures made him ineligible to continue in the Conservatoire's piano classes, but he remained a student for harmony, solfège and, later, composition.
Prix de Rome.
At the end of 1880 Debussy, while continuing his studies at the Conservatoire, was engaged as accompanist for Marie Moreau-Sainti's singing class; he took this role for four years. Among the members of the class was Marie Vasnier; Debussy was greatly taken with her, and she inspired him to compose: he wrote 27 songs dedicated to her during their seven-year relationship. She was the wife of Henri Vasnier, a prominent civil servant, and much younger than her husband. She soon became Debussy's lover as well as his muse. Whether Vasnier was content to tolerate his wife's affair with the young student or was simply unaware of it is not clear, but he and Debussy remained on excellent terms, and he continued to encourage the composer in his career.
At the Conservatoire, Debussy incurred the disapproval of the faculty, particularly his composition teacher, Guiraud, for his failure to follow the orthodox rules of composition then prevailing.{{refn|The director of the Conservatoire, Ambroise Thomas, was a deeply conservative musician, as were most of his faculty. It was not until Gabriel Fauré became director in 1905 that modern music such as Debussy's or even Wagner's was accepted within the Conservatoire.|group= n}} Nevertheless, in 1884 Debussy won France's most prestigious musical award, the Prix de Rome, with his cantata "L'enfant prodigue". The Prix carried with it a residence at the Villa Medici, the French Academy in Rome, to further the winner's studies. Debussy was there from January 1885 to March 1887, with three or possibly four absences of several weeks when he returned to France, chiefly to see Marie Vasnier.
Initially Debussy found the artistic atmosphere of the Villa Medici stifling, the company boorish, the food bad, and the accommodation "abominable". Neither did he delight in Italian opera, as he found the operas of Donizetti and Verdi not to his taste. He was much more impressed by the music of the 16th-century composers Palestrina and Lassus, which he heard at Santa Maria dell'Anima: "The only church music I will accept". He was often depressed and unable to compose, but he was inspired by Franz Liszt, who visited the students and played for them. In June 1885, Debussy wrote of his desire to follow his own way, saying, "I am sure the Institute would not approve, for, naturally it regards the path which it ordains as the only right one. But there is no help for it! I am too enamoured of my freedom, too fond of my own ideas!"
Debussy finally composed four pieces that were submitted to the Academy: the symphonic ode "Zuleima" (based on a text by Heinrich Heine); the orchestral piece "Printemps"; the cantata "La Damoiselle élue" (1887–1888), the first piece in which the stylistic features of his later music began to emerge; and the "Fantaisie" for piano and orchestra, which was heavily based on Franck's music and was eventually withdrawn by Debussy. The Academy chided him for writing music that was "bizarre, incomprehensible and unperformable". Although Debussy's works showed the influence of Jules Massenet, the latter concluded, "He is an enigma". During his years in Rome Debussy composed – not for the Academy – most of his Verlaine cycle, "Ariettes oubliées", which made little impact at the time but was successfully republished in 1903 after the composer had become well known.
Return to Paris, 1887.
A week after his return to Paris in 1887, Debussy heard the first act of Wagner's "Tristan und Isolde" at the Concerts Lamoureux, and judged it "decidedly the finest thing I know". In 1888 and 1889 he went to the annual festivals of Wagner's operas at Bayreuth. He responded positively to Wagner's sensuousness, mastery of form, and striking harmonies, and was briefly influenced by them, but, unlike some other French composers of his generation, he concluded that there was no future in attempting to adopt and develop Wagner's style. He commented in 1903 that Wagner was "a beautiful sunset that was mistaken for a dawn".
Marie Vasnier ended her liaison with Debussy soon after his final return from Rome, although they remained on good enough terms for him to dedicate to her one more song, "Mandoline", in 1890. Later in 1890 Debussy met Erik Satie, who proved a kindred spirit in his experimental approach to composition. Both were bohemians, enjoying the same café society and struggling to survive financially. In the same year Debussy began a relationship with Gabrielle (Gaby) Dupont, a tailor's daughter from Lisieux; in July 1893 they began living together.
Debussy continued to compose songs, piano pieces and other works, some of which were publicly performed, but his music made only a modest impact, although his fellow composers recognised his potential by electing him to the committee of the Société Nationale de Musique in 1893. His String Quartet was premiered by the Ysaÿe string quartet at the Société Nationale in the same year. In May 1893 Debussy attended a theatrical event that was of key importance to his later career – the premiere of Maurice Maeterlinck's play "Pelléas et Mélisande", which he immediately determined to turn into an opera. He travelled to Maeterlinck's home in Ghent in November to secure his consent to an operatic adaptation.
1894–1902: "Pelléas et Mélisande".
In February 1894 Debussy completed the first draft of Act I of his operatic version of "Pelléas et Mélisande", and for most of the year worked to complete the work. While still living with Dupont, he had an affair with the singer Thérèse Roger, and in 1894 he announced their engagement. His behaviour was widely condemned; anonymous letters circulated denouncing his treatment of both women, as well as his financial irresponsibility and debts. The engagement was broken off, and several of Debussy's friends and supporters disowned him, including Ernest Chausson, hitherto one of his strongest supporters.
In terms of musical recognition, Debussy made a step forward in December 1894, when the symphonic poem "Prélude à l'après-midi d'un faune", based on Stéphane Mallarmé's poem, was premiered at a concert of the Société Nationale. The following year he completed the first draft of "Pelléas" and began efforts to get it staged. In May 1898 he made his first contacts with André Messager and Albert Carré, respectively the musical director and general manager of the Opéra-Comique, Paris, about presenting the opera.
Debussy abandoned Dupont for her friend Marie-Rosalie Texier, known as "Lilly", whom he married in October 1899, after threatening suicide if she refused him. She was affectionate, practical, straightforward, and well liked by Debussy's friends and associates, but he became increasingly irritated by her intellectual limitations and lack of musical sensitivity. The marriage lasted barely five years.
From around 1900 Debussy's music became a focus and inspiration for an informal group of innovative young artists, poets, critics, and musicians who began meeting in Paris. They called themselves "Les Apaches" – roughly "The Hooligans" – to represent their status as "artistic outcasts". The membership was fluid, but at various times included Maurice Ravel, Ricardo Viñes, Igor Stravinsky and Manuel de Falla.{{#tag:ref|Other members were the composers Florent Schmitt, Maurice Delage and Paul Ladmirault, the poets Léon-Paul Fargue and Tristan Klingsor, the painter Paul Sordes and the critic Michel Calvocoressi.|group= n}} In the same year the first two of Debussy's three orchestral "Nocturnes" were first performed. Although they did not make any great impact with the public they were well reviewed by musicians including Paul Dukas, Alfred Bruneau and Pierre de Bréville. The complete set was given the following year.
Like many other composers of the time, Debussy supplemented his income by teaching and writing.{{refn|Saint-Saëns, Franck, Massenet, Fauré and Ravel were all known as teachers, and Fauré, Messager and Dukas were regular music critics for Parisian journals.|group= n}} For most of 1901 he had a sideline as music critic of "La Revue Blanche", adopting the pen name "Monsieur Croche". He expressed trenchant views on composers ("I hate sentimentality – his name is Camille Saint-Saëns"), institutions (on the Paris Opéra: "A stranger would take it for a railway station, and, once inside, would mistake it for a Turkish bath"), conductors ("Nikisch is a unique virtuoso, so much so that his virtuosity seems to make him forget the claims of good taste"), musical politics ("The English actually think that a musician can manage an opera house successfully!"), and audiences ("their almost drugged expression of boredom, indifference and even stupidity"). He later collected his criticisms with a view to their publication as a book; it was published posthumously as "Monsieur Croche, Antidilettante".
In January 1902 rehearsals began at the Opéra-Comique for the opening of "Pelléas et Mélisande". For three months, Debussy attended rehearsals practically every day. In February there was conflict between Maeterlinck on the one hand and Debussy, Messager and Carré on the other about the casting of Mélisande. Maeterlinck wanted his mistress, Georgette Leblanc, to sing the role, and was incensed when she was passed over in favour of the Scottish soprano Mary Garden.{{refn|Mary Garden was Messager's mistress at the time, but as far as is known she was chosen for wholly musical and dramatic reasons. She is described in the "Grove Dictionary of Music and Musicians" as "a supreme singing-actress, with uncommonly vivid powers of characterization ... and a rare subtlety of colour and phrasing."|group= n}} The opera opened on 30 April 1902, and although the first-night audience was divided between admirers and sceptics, the work quickly became a success. It made Debussy a well-known name in France and abroad; "The Times" commented that the opera had "provoked more discussion than any work of modern times, excepting, of course, those of Richard Strauss". The Apaches, led by Ravel (who attended every one of the 14 performances in the first run), were loud in their support; the conservative faculty of the Conservatoire tried in vain to stop its students from seeing the opera. The vocal score was published in early May, and the full orchestral score in 1904.
1903–1918.
In 1903 there was public recognition of Debussy's stature when he was appointed a Chevalier of the Légion d'honneur, but his social standing suffered a great blow when another turn in his private life caused a scandal the following year. One of his pupils was Raoul Bardac, son of Emma and her husband, Parisian banker Sigismond Bardac. Raoul introduced his teacher to his mother, to whom Debussy quickly became greatly attracted. She was sophisticated, a brilliant conversationalist, an accomplished singer, and relaxed about marital fidelity, having been the mistress and muse of Gabriel Fauré a few years earlier. After despatching Lilly to her parental home at Bichain in Villeneuve-la-Guyard on 15 July 1904, Debussy took Emma away, staying incognito in Jersey and then at Pourville in Normandy. He wrote to his wife on 11 August from Dieppe, telling her that their marriage was over, but still making no mention of Bardac. When he returned to Paris he set up home on his own, taking a flat in a different arrondissement. On 14 October, five days before their fifth wedding anniversary, Lilly Debussy attempted suicide, shooting herself in the chest with a revolver;{{refn|A fictionalised and melodramatic dramatisation of the affair, "La femme nue", played in Paris in 1908. A myth grew up that Lilly Debussy shot herself in the Place de la Concorde, rather than at home. That version of events is not corroborated by Debussy scholars such as Marcel Dietschy, Roger Nichols, Robert Orledge and Nigel Simeone; and no mention of the Place de la Concorde appeared in even the most sensational press coverage at the time. Another inaccurate report of the case, in "Le Figaro" in early January 1905, stated that Lilly had made a second attempt at suicide.|group= n}} she survived, although the bullet remained lodged in her vertebrae for the rest of her life. The ensuing scandal caused Bardac's family to disown her, and Debussy lost many good friends including Dukas and Messager. His relations with Ravel, never close, were exacerbated when the latter joined other former friends of Debussy in contributing to a fund to support the deserted Lilly.
The Bardacs divorced in May 1905. Finding the hostility in Paris intolerable, Debussy and Emma (now pregnant) went to England. They stayed at the Grand Hotel, Eastbourne in July and August, where Debussy corrected the proofs of his symphonic sketches "La mer", celebrating his divorce on 2 August. After a brief visit to London, the couple returned to Paris in September, buying a house in a courtyard development off the Avenue du Bois de Boulogne (now Avenue Foch), Debussy's home for the rest of his life.
In October 1905 "La mer", Debussy's most substantial orchestral work, was premiered in Paris by the Orchestre Lamoureux under the direction of Camille Chevillard; the reception was mixed. Some praised the work, but Pierre Lalo, critic of "Le Temps", hitherto an admirer of Debussy, wrote, "I do not hear, I do not see, I do not smell the sea".{{refn|Lalo objected to what he felt was the artificiality of the piece: "a reproduction of nature; a wonderfully refined, ingenious and carefully composed reproduction, but a reproduction none the less". Another Parisian critic, Louis Schneider, wrote, "The audience seemed rather disappointed: they expected the ocean, something big, something colossal, but they were served instead with some agitated water in a saucer."|group= n}} In the same month the composer's only child was born at their home. Claude-Emma, affectionately known as "Chouchou", was a musical inspiration to the composer (she was the dedicatee of his "Children's Corner" suite). She outlived her father by scarcely a year, succumbing to the diphtheria epidemic of 1919. Mary Garden said, "I honestly don't know if Debussy ever loved anybody really. He loved his music – and perhaps himself. I think he was wrapped up in his genius", but biographers are agreed that whatever his relations with lovers and friends, Debussy was devoted to his daughter.
Debussy and Emma Bardac eventually married in 1908, their troubled union enduring for the rest of his life. The following year began well, when at Fauré's invitation, Debussy became a member of the governing council of the Conservatoire. His success in London was consolidated in April 1909, when he conducted "Prélude à l'après-midi d'un faune" and the "Nocturnes" at the Queen's Hall; in May he was present at the first London production of "Pelléas et Mélisande", at Covent Garden. In the same year, Debussy was diagnosed with colorectal cancer, from which he was to die nine years later.
Debussy's works began to feature increasingly in concert programmes at home and overseas. In 1910 Gustav Mahler conducted the "Nocturnes" and "Prélude à l'après-midi d'un faune" in New York in successive months. In the same year, visiting Budapest, Debussy commented that his works were better known there than in Paris. In 1912 Sergei Diaghilev commissioned a new ballet score, "Jeux". That, and the three "Images", premiered the following year, were the composer's last orchestral works. "Jeux" was unfortunate in its timing: two weeks after the premiere, in March 1913, Diaghilev presented the first performance of Stravinsky's "The Rite of Spring", a sensational event that monopolised discussion in musical circles, and effectively sidelined "Jeux" along with Fauré's "Pénélope", which had opened a week before.
In 1915 Debussy underwent one of the earliest colostomy operations. It achieved only a temporary respite, and occasioned him considerable frustration ("There are mornings when the effort of dressing seems like one of the twelve labours of Hercules"). He also had a fierce enemy at this period in the form of Camille Saint-Saëns, who in a letter to Fauré condemned Debussy's "En blanc et noir": "It's incredible, and the door of the Institut [de France] must at all costs be barred against a man capable of such atrocities". Saint-Saëns had been a member of the Institut since 1881: Debussy never became one. His health continued to decline; he gave his final concert on 14 September 1917 and became bedridden in early 1918.
Debussy died of colon cancer on 25 March 1918 at his home, aged 55. The First World War was still raging and Paris was under German aerial and artillery bombardment. The military situation did not permit the honour of a public funeral with ceremonious graveside orations. The funeral procession made its way through deserted streets to a temporary grave at Père Lachaise Cemetery as the German guns bombarded the city. Debussy's body was reinterred the following year in the small Passy Cemetery sequestered behind the Trocadéro, fulfilling his wish to rest "among the trees and the birds"; his wife and daughter are buried with him.
Works.
In a survey of Debussy's oeuvre shortly after the composer's death, the critic Ernest Newman wrote, "It would be hardly too much to say that Debussy spent a third of his life in the discovery of himself, a third in the free and happy realisation of himself, and the final third in the partial, painful loss of himself". Later commentators have rated some of the late works more highly than Newman and other contemporaries did, but much of the music for which Debussy is best known is from the middle years of his career.
The analyst David Cox wrote in 1974 that Debussy, admiring Wagner's attempts to combine all the creative arts, "created a new, instinctive, dreamlike world of music, lyrical and pantheistic, contemplative and objective – a kind of art, in fact, which seemed to reach out into all aspects of experience". In 1988 the composer and scholar Wilfrid Mellers wrote of Debussy:
Debussy did not give his works opus numbers, apart from his String Quartet, Op. 10 in G minor (also the only work where the composer's title included a key). His works were catalogued and indexed by the musicologist François Lesure in 1977 (revised in 2003) and their Lesure number ("L" followed by a number) is sometimes used as a suffix to their title in concert programmes and recordings.
Early works, 1879–1892.
{{listen|filename=Clair de lune (Claude Debussy) Suite bergamasque.ogg|title="Clair de Lune" (5:04)|description=Composed in 1890, performed by Laurens Goedhart in 2011}}
Debussy's musical development was slow, and as a student he was adept enough to produce for his teachers at the Conservatoire works that would conform to their conservative precepts. His friend Georges Jean-Aubry commented that Debussy "admirably imitated Massenet's melodic turns of phrase" in the cantata "L'enfant prodigue" (1884) which won him the Prix de Rome. A more characteristically Debussian work from his early years is "La Damoiselle élue", recasting the traditional form for oratorios and cantatas, using a chamber orchestra and a small body of choral tone and using new or long-neglected scales and harmonies. His early "mélodies", inspired by Marie Vasnier, are more virtuosic in character than his later works in the genre, with extensive wordless "vocalise"; from the "Ariettes oubliées" (1885–1887) onwards he developed a more restrained style. He wrote his own poems for the "Proses lyriques" (1892–1893) but, in the view of the musical scholar Robert Orledge, "his literary talents were not on a par with his musical imagination".
The musicologist Jacques-Gabriel Prod'homme wrote that, together with "La Demoiselle élue", the "Ariettes oubliées" and the "Cinq poèmes de Charles Baudelaire" (1889) show "the new, strange way which the young musician will hereafter follow". Newman concurred: "There is a good deal of Wagner, especially of "Tristan", in the idiom. But the work as a whole is distinctive, and the first in which we get a hint of the Debussy we were to know later – the lover of vague outlines, of half-lights, of mysterious consonances and dissonances of colour, the apostle of languor, the exclusivist in thought and in style." During the next few years Debussy developed his personal style, without, at this stage, breaking sharply away from French musical traditions. Much of his music from this period is on a small scale, such as the "Two Arabesques", "Valse romantique", "Suite bergamasque", and the first set of "Fêtes galantes". Newman remarked that, like Chopin, the Debussy of this period appears as a liberator from Germanic styles of composition – offering instead "an exquisite, pellucid style" capable of conveying "not only gaiety and whimsicality but emotion of a deeper sort". In a 2004 study, Mark DeVoto comments that Debussy's early works are harmonically no more adventurous than existing music by Fauré; in a 2007 book about the piano works, Margery Halford observes that "Two Arabesques" (1888–1891) and "Rêverie" (1890) have "the fluidity and warmth of Debussy's later style" but are not harmonically innovative. Halford cites the popular "Clair de Lune" (1890), the third of the four movements of "Suite Bergamasque", as a transitional work pointing towards the composer's mature style.
Middle works, 1893–1905.
Musicians from Debussy's time onwards have regarded "Prélude à l'après-midi d'un faune" (1894) as his first orchestral masterpiece. Newman considered it "completely original in idea, absolutely personal in style, and logical and coherent from first to last, without a superfluous bar or even a superfluous note"; Pierre Boulez observed, "Modern music was awakened by "Prélude à l'après-midi d'un faune"". Most of the major works for which Debussy is best known were written between the mid-1890s and the mid-1900s. They include the String Quartet (1893), "Pelléas et Mélisande" (1893–1902), the "Nocturnes for Orchestra" (1899) and "La mer" (1903–1905). The suite "Pour le piano" (1894–1901) is, in Halford's view, one of the first examples of the mature Debussy as a composer for the piano: "a major landmark ... and an enlargement of the use of piano sonorities".
In the String Quartet (1893), the gamelan sonorities Debussy had heard four years earlier are recalled in the pizzicatos and cross-rhythms of the scherzo. Debussy's biographer Edward Lockspeiser comments that this movement shows the composer's rejection of "the traditional dictum that string instruments should be predominantly lyrical". The work influenced Ravel, whose own String Quartet, written ten years later, has noticeably Debussian features. The academic and journalist Stephen Walsh calls "Pelléas et Mélisande" (begun 1893, staged 1902) "a key work for the 20th century". The composer Olivier Messiaen was fascinated by its "extraordinary harmonic qualities and ... transparent instrumental texture". The opera is composed in what Alan Blyth describes as a sustained and heightened recitative style, with "sensuous, intimate" vocal lines. It influenced composers as different as Stravinsky and Puccini.
Orledge describes the "Nocturnes" as exceptionally varied in texture, "ranging from the Musorgskian start of 'Nuages', through the approaching brass band procession in 'Fêtes', to the wordless female chorus in 'Sirènes{{'"}}. Orledge considers the last a pre-echo of the marine textures of "La mer". "Estampes" for piano (1903) gives impressions of exotic locations, with further echoes of the gamelan in its pentatonic structures. Debussy believed that since Beethoven, the traditional symphonic form had become formulaic, repetitive and obsolete.{{refn|He described the symphonies of Schumann and Mendelssohn as "respectful repetition"|group= n}} The three-part, cyclic symphony by César Franck (1888) was more to his liking, and its influence can be found in "La mer" (1905); this uses a quasi-symphonic form, its three sections making up a giant sonata-form movement with, as Orledge observes, a cyclic theme, in the manner of Franck. The central "Jeux de vagues" section has the function of a symphonic development section leading into the final "Dialogue du vent et de la mer", "a powerful essay in orchestral colour and sonority" (Orledge) which reworks themes from the first movement. The reviews were sharply divided. Some critics thought the treatment less subtle and less mysterious than his previous works, and even a step backward; others praised its "power and charm", its "extraordinary verve and brilliant fantasy", and its strong colours and definite lines.
Late works, 1906–1917.
Of the later orchestral works, "Images" (1905–1912) is better known than "Jeux" (1913). The former follows the tripartite form established in the "Nocturnes" and "La mer", but differs in employing traditional British and French folk tunes, and in making the central movement, "Ibéria", far longer than the outer ones, and subdividing it into three parts, all inspired by scenes from Spanish life. Although considering "Images" "the pinnacle of Debussy's achievement as a composer for orchestra", Trezise notes a contrary view that the accolade belongs to the ballet score "Jeux". The latter failed as a ballet because of what Jann Pasler describes as a banal scenario, and the score was neglected for some years. Recent analysts have found it a link between traditional continuity and thematic growth within a score and the desire to create discontinuity in a way mirrored in later 20th century music. In this piece, Debussy abandoned the whole-tone scale he had often favoured previously in favour of the octatonic scale with what the Debussy scholar François Lesure describes as its tonal ambiguities.
{{listen|filename=The Girl with the Flaxen Hair.ogg|title="La fille aux cheveux de lin"|description=Performed by Mike Ambrose}}
Among the late piano works are two books of "Préludes" (1909–10, 1911–13), short pieces that depict a wide range of subjects. Lesure comments that they range from the frolics of minstrels at Eastbourne in 1905 and the American acrobat "General Lavine" "to dead leaves and the sounds and scents of the evening air". "En blanc et noir" (In white and black, 1915), a three-movement work for two pianos, is a predominantly sombre piece, reflecting the war and national danger. The "Études" (1915) for piano have divided opinion. Writing soon after Debussy's death, Newman found them laboured – "a strange last chapter in a great artist's life"; Lesure, writing eighty years later, rates them among Debussy's greatest late works: "Behind a pedagogic exterior, these 12 pieces explore abstract intervals, or – in the last five – the sonorities and timbres peculiar to the piano." In 1914 Debussy started work on a planned set of six sonatas for various instruments. His fatal illness prevented him from completing the set, but those for cello and piano (1915), flute, viola and harp (1915), and violin and piano (1917 – his last completed work) are all concise, three-movement pieces, more diatonic in nature than some of his other late works.
"Le Martyre de saint Sébastien" (1911), originally a five-act musical play to a text by Gabriele D'Annunzio that took nearly five hours in performance, was not a success, and the music is now more often heard in a concert (or studio) adaptation with narrator, or as an orchestral suite of "Fragments symphoniques". Debussy enlisted the help of André Caplet in orchestrating and arranging the score. Two late stage works, the ballets "Khamma" (1912) and "La boîte à joujoux" (1913), were left with the orchestration incomplete, and were completed by Charles Koechlin and Caplet, respectively.
Style.
Debussy and Impressionism.
The application of the term "Impressionist" to Debussy and the music he influenced has been much debated, both during his lifetime and since. The analyst Richard Langham Smith writes that Impressionism was originally a term coined to describe a style of late 19th-century French painting, typically scenes suffused with reflected light in which the emphasis is on the overall impression rather than outline or clarity of detail, as in works by Monet, Pissarro, Renoir and others. Langham Smith writes that the term became transferred to the compositions of Debussy and others which were "concerned with the representation of landscape or natural phenomena, particularly the water and light imagery dear to Impressionists, through subtle textures suffused with instrumental colour".
Among painters, Debussy particularly admired Turner, but also drew inspiration from Whistler. With the latter in mind the composer wrote to the violinist Eugène Ysaÿe in 1894 describing the orchestral "Nocturnes" as "an experiment in the different combinations that can be obtained from one colour – what a study in grey would be in painting."
In this context may be placed Debussy's pantheistic eulogy to Nature, in a 1911 interview with Henry Malherbe:
In contrast to the "impressionistic" characterisation of Debussy's music, several writers have suggested that he structured at least some of his music on rigorous mathematical lines. In 1983 the pianist and scholar Roy Howat published a book contending that certain of Debussy's works are proportioned using mathematical models, even while using an apparent classical structure such as sonata form. Howat suggests that some of Debussy's pieces can be divided into sections that reflect the golden ratio, which is approximated by ratios of consecutive numbers in the Fibonacci sequence. Simon Trezise, in his 1994 book "Debussy: La Mer", finds the intrinsic evidence "remarkable", with the caveat that no written or reported evidence suggests that Debussy deliberately sought such proportions. Lesure takes a similar view, endorsing Howat's conclusions while not taking a view on Debussy's conscious intentions.
Musical idiom.
Debussy wrote "We must agree that the beauty of a work of art will always remain a mystery [...] we can never be absolutely sure 'how it's made.' We must at all costs preserve this magic which is peculiar to music and to which music, by its nature, is of all the arts the most receptive."
Nevertheless, there are many indicators of the sources and elements of Debussy's idiom. Writing in 1958, the critic Rudolph Reti summarised six features of Debussy's music, which he asserted "established a new concept of tonality in European music": the frequent use of lengthy pedal points – "not merely bass pedals in the actual sense of the term, but sustained 'pedals' in any voice"; glittering passages and webs of figurations which distract from occasional absence of tonality; frequent use of parallel chords which are "in essence not harmonies at all, but rather 'chordal melodies', enriched unisons", described by some writers as non-functional harmonies; bitonality, or at least bitonal chords; use of the whole-tone and pentatonic scales; and unprepared modulations, "without any harmonic bridge". Reti concludes that Debussy's achievement was the synthesis of monophonically based "melodic tonality" with harmonies, albeit different from those of "harmonic tonality".
In 1889, Debussy held conversations with his former teacher Guiraud, which included exploration of harmonic possibilities at the piano. The discussion, and Debussy's chordal keyboard improvisations, were noted by a younger pupil of Guiraud, Maurice Emmanuel. The chord sequences played by Debussy include some of the elements identified by Reti. They may also indicate the influence on Debussy of Satie's 1887 "Trois Sarabandes". A further improvisation by Debussy during this conversation included a sequence of whole tone harmonies which may have been inspired by the music of Glinka or Rimsky-Korsakov which was becoming known in Paris at this time. During the conversation, Debussy told Guiraud, "There is no theory. You have only to listen. Pleasure is the law!" – although he also conceded, "I feel free because I have been through the mill, and I don't write in the fugal style because I know it."
Influences.
Musical.
Among French predecessors, Chabrier was an important influence on Debussy (as he was on Ravel and Poulenc); Howat has written that Chabrier's piano music such as "Sous-bois" and "Mauresque" in the "Pièces pittoresques" explored new sound-worlds of which Debussy made effective use 30 years later. Lesure finds traces of Gounod and Massenet in some of Debussy's early songs, and remarks that it may have been from the Russians – Tchaikovsky, Balakirev, Rimsky-Korsakov, Borodin and Mussorgsky – that Debussy acquired his taste for "ancient and oriental modes and for vivid colorations, and a certain disdain for academic rules". Lesure also considers that Mussorgsky's opera "Boris Godunov" directly influenced Debussy's "Pelléas et Mélisande". In the music of Palestrina, Debussy found what he called "a perfect whiteness", and he felt that although Palestrina's musical forms had a "strict manner", they were more to his taste than the rigid rules prevailing among 19th-century French composers and teachers. He drew inspiration from what he called Palestrina's "harmony created by melody", finding an arabesque-like quality in the melodic lines.
Although Debussy was in no doubt of Wagner's stature, he was only briefly influenced by him in his compositions, after "La damoiselle élue" and the "Cinq poèmes de Baudelaire" (both begun in 1887). According to Pierre Louÿs, Debussy "did not see 'what anyone can do beyond Tristan'," although he admitted that it was sometimes difficult to avoid "the ghost of old Klingsor, alias Richard Wagner, appearing at the turning of a bar". After Debussy's short Wagnerian phase, he started to become interested in non-Western music and its unfamiliar approaches to composition. The piano piece "Golliwogg's Cakewalk", from the 1908 suite "Children's Corner", contains a parody of music from the introduction to "Tristan", in which, in the opinion of the musicologist Lawrence Kramer, Debussy escapes the shadow of the older composer and "smilingly relativizes Wagner into insignificance".
A contemporary influence was Erik Satie, according to Nichols, Debussy's "most faithful friend" amongst French musicians. Debussy's orchestration in 1896 of Satie's "Gymnopédies" (which had been written in 1887) "put their composer on the map" according to the musicologist Richard Taruskin, and the Sarabande from Debussy's "Pour le piano" (1901) "shows that [Debussy] knew Satie's "Trois Sarabandes" at a time when only a personal friend of the composer could have known them" (they were not published until 1911). Debussy's interest in the popular music of his time is evidenced not only by the "Golliwogg's Cakewalk" and other piano pieces featuring rag-time, such as "The Little Nigar" (Debussy's spelling) (1909), but by the slow waltz "La plus que lente" ("The more than slow"), based on the style of the gipsy violinist at a Paris hotel (to whom he gave the manuscript of the piece).
In addition to the composers who influenced his own compositions, Debussy held strong views about several others. He was for the most part enthusiastic about Richard Strauss and Stravinsky, respectful of Mozart and was in awe of Bach, whom he called the "good God of music" ({{Lang|fr|le Bon Dieu de la musique}}).{{refn|He remarked to a colleague that if Wagner, Mozart and Beethoven could come to his door and ask him to play "Pelléas" to them, he would gladly do so, but if it were Bach, he would be too in awe to dare.|group= n}} His relationship to Beethoven was complex; he was said to refer to him as {{Lang|fr|le vieux sourd}} ('the old deaf one') and asked one young pupil not to play Beethoven's music for "it is like somebody dancing on my grave;" but he believed that Beethoven had profound things to say, yet did not know how to say them, "because he was imprisoned in a web of incessant restatement and of German aggressiveness." He was not in sympathy with Schubert, Schumann, Brahms and Mendelssohn, the latter being described as a "facile and elegant notary".
With the advent of the First World War, Debussy became ardently patriotic in his musical opinions. Writing to Stravinsky, he asked "How could we not have foreseen that these men were plotting the destruction of our art, just as they had planned the destruction of our country?" In 1915 he complained that "since Rameau we have had no purely French tradition [...] We tolerated overblown orchestras, tortuous forms [...] we were about to give the seal of approval to even more suspect naturalizations when the sound of gunfire put a sudden stop to it all." Taruskin writes that some have seen this as a reference to the composers Gustav Mahler and Arnold Schoenberg, both born Jewish. In 1912 Debussy had remarked to his publisher of the opera "Ariane et Barbe-bleue" by the (also Jewish) composer Paul Dukas, "You're right, [it] is a masterpiece – but it's not a masterpiece of French music."
On the other hand, Charles Rosen argued in a review of Taruskin's work that Debussy was instead implying "that [Dukas's] opera was too Wagnerian, too German, to fit his ideal of French style", citing Georges Liébert, one of the editors of Debussy's collected correspondence, as an authority, saying that Debussy was not antisemitic.
Literary.
Despite his lack of formal schooling, Debussy read widely and found inspiration in literature. Lesure writes, "The development of free verse in poetry and the disappearance of the subject or model in painting influenced him to think about issues of musical form." Debussy was influenced by the Symbolist poets. These writers, who included Verlaine, Mallarmé, Maeterlinck and Rimbaud, reacted against the realism, naturalism, objectivity and formal conservatism that prevailed in the 1870s. They favoured poetry using suggestion rather than direct statement; the literary scholar Chris Baldick writes that they evoked "subjective moods through the use of private symbols, while avoiding the description of external reality or the expression of opinion". Debussy was much in sympathy with the Symbolists' desire to bring poetry closer to music, became friendly with several leading exponents, and set many Symbolist works throughout his career.
Debussy's literary inspirations were mostly French, but he did not overlook foreign writers. As well as Maeterlinck for "Pelléas et Mélisande", he drew on Shakespeare and Dickens for two of his Préludes for piano – "La Danse de Puck" (Book 1, 1910) and "Hommage à S. Pickwick Esq. P.P.M.P.C." (Book 2, 1913). He set Dante Gabriel Rossetti's "The Blessed Damozel" in his early cantata, "La Damoiselle élue" (1888). He wrote incidental music for "King Lear" and planned an opera based on "As You Like It", but abandoned that once he turned his attention to setting Maeterlinck's play. In 1890 he began work on an orchestral piece inspired by Edgar Allan Poe's "The Fall of the House of Usher" and later sketched the libretto for an opera, "La chute de la maison Usher". Another project inspired by Poe – an operatic version of "The Devil in the Belfry" – did not progress beyond sketches. French writers whose words he set include Paul Bourget, Alfred de Musset, Théodore de Banville, Leconte de Lisle, Théophile Gautier, Paul Verlaine, François Villon, and Mallarmé – the last of whom also provided Debussy with the inspiration for one of his most popular orchestral pieces, "Prélude à l'après-midi d'un faune".
Influence on later composers.
Debussy is widely regarded as one of the most influential composers of the 20th century. Roger Nichols writes that "if one omits Schoenberg [...] a list of 20th-century composers influenced by Debussy is practically a list of 20th-century composers "tout court"."
Bartók first encountered Debussy's music in 1907 and later said that "Debussy's great service to music was to reawaken among all musicians an awareness of harmony and its possibilities". Not only Debussy's use of whole-tone scales, but also his style of word-setting in "Pelléas et Mélisande", were the subject of study by Leoš Janáček while he was writing his 1921 opera "Káťa Kabanová". Stravinsky was more ambivalent about Debussy's music (he thought "Pelléas" "a terrible bore ... in spite of many wonderful pages") but the two composers knew each other and Stravinsky's "Symphonies of Wind Instruments" (1920) was written as a memorial for Debussy.
In the aftermath of the First World War, the young French composers of Les Six reacted against what they saw as the poetic, mystical quality of Debussy's music in favour of something more hard-edged. Their sympathiser and self-appointed spokesman Jean Cocteau wrote in 1918: "Enough of "nuages", waves, aquariums, "ondines" and nocturnal perfumes," pointedly alluding to the titles of pieces by Debussy. Later generations of French composers had a much more positive relationship with his music. Messiaen was given a score of "Pelléas et Mélisande" as a boy and said that it was "a revelation, love at first sight" and "probably the most decisive influence I have been subject to". Boulez also discovered Debussy's music at a young age and said that it gave him his first sense of what modernity in music could mean.
Among contemporary composers George Benjamin has described "Prélude à l'après-midi d'un faune" as "the definition of perfection"; he has conducted "Pelléas et Mélisande" and the critic Rupert Christiansen detects the influence of the work in Benjamin's opera "Written on Skin" (2012). Others have made orchestrations of some of the piano and vocal works, including John Adams's version of four of the Baudelaire songs ("Le Livre de Baudelaire", 1994), Robin Holloway's of "En blanc et noir" (2002), and Colin Matthews's of both books of "Préludes" (2001–2006).
Recordings.
In 1904, Debussy played the piano accompaniment for Mary Garden in recordings for the Compagnie française du Gramophone of four of his songs: three "mélodies" from the Verlaine cycle "Ariettes oubliées" – "Il pleure dans mon coeur", "L'ombre des arbres" and "Green" – and "Mes longs cheveux", from Act III of "Pelléas et Mélisande". He made a set of piano rolls for the Welte-Mignon company in 1913. They contain fourteen of his pieces: "D'un cahier d'esquisses", "La plus que lente", "La soirée dans Grenade", all six movements of "Children's Corner", and five of the "Preludes": "Danseuses de Delphes", "Le vent dans la plaine", "La cathédrale engloutie", "La danse de Puck" and "Minstrels". The 1904 and 1913 sets have been transferred to compact disc.
Contemporaries of Debussy who made recordings of his music included the pianists Ricardo Viñes (in "Poissons d'or" from "Images" and "La soirée dans Grenade" from "Estampes"); Alfred Cortot (numerous solo pieces as well as the Violin Sonata with Jacques Thibaud and the "Chansons de Bilitis" with Maggie Teyte); and Marguerite Long ("Jardins sous la pluie" and "Arabesques"). Singers in Debussy's mélodies or excerpts from "Pelléas et Mélisande" included Jane Bathori, Claire Croiza, Charles Panzéra and Ninon Vallin; and among the conductors in the major orchestral works were Ernest Ansermet, Désiré-Émile Inghelbrecht, Pierre Monteux and Arturo Toscanini, and in the "Petite Suite", Henri Büsser, who had prepared the orchestration for Debussy. Many of these early recordings have been reissued on CD.
In more recent times Debussy's output has been extensively recorded. In 2018, to mark the centenary of the composer's death, Warner Classics, with contributions from other companies, issued a 33-CD set that is claimed to include all the music Debussy wrote.
|
6261
|
26554001
|
https://en.wikipedia.org/wiki?curid=6261
|
Charles Baxter (author)
|
Charles Morley Baxter (born May 13, 1947) is an American novelist, essayist, and poet.
Biography.
Baxter was born in Minneapolis, Minnesota, to John and Mary Barber (Eaton) Baxter. He graduated from Macalester College in Saint Paul in 1969. In 1974 he received his PhD in English from the University at Buffalo with a thesis on Djuna Barnes, Malcolm Lowry, and Nathanael West.
Baxter taught high school in Pinconning, Michigan for a year before beginning his university teaching career at Wayne State University in Detroit, Michigan. He then moved to the University of Michigan, where for many years he directed the Creative Writing MFA program. He was a visiting professor of creative writing at the University of Iowa and at Stanford. He taught at the University of Minnesota and in the Warren Wilson College MFA Program for Writers. He retired in 2020.
He was awarded a Guggenheim Fellowship in 1985. His short story "Snow" was included in The Norton Anthology of Contemporary Fiction edited in 1998 by R. V. Cassill and Joyce Carol Oates. He received the PEN/Malamud Award in 2021 for Excellence in the Short Story.
He married teacher Martha Ann Hauser in 1976, and has a son; he and Hauser later separated.
|
6267
|
7903804
|
https://en.wikipedia.org/wiki?curid=6267
|
Cultural imperialism
|
Cultural imperialism (also cultural colonialism) comprises the cultural dimensions of imperialism. The word "imperialism" describes practices in which a country engages culture (language, tradition, ritual, politics, economics) to create and maintain unequal social and economic relationships among social groups. Cultural imperialism often uses wealth, media power and violence to implement the system of cultural hegemony that legitimizes imperialism.
Cultural imperialism may take various forms, such as an attitude, a formal policy, or military action—insofar as each of these reinforces the empire's cultural hegemony. Research on the topic occurs in scholarly disciplines, and is especially prevalent in communication and media studies, education, foreign policy, history, international relations, linguistics, literature, post-colonialism, science, sociology, social theory, environmentalism, and sports.
Cultural imperialism may be distinguished from the natural process of cultural diffusion. The spread of culture around the world is referred to as cultural globalization.
Background and definitions.
Although the "Oxford English Dictionary" has a 1921 reference to the "cultural imperialism of the Russians", John Tomlinson, in his book on the subject, writes that the term emerged in the 1960s and has been a focus of research since at least the 1970s. Terms such as "media imperialism", "structural imperialism", "cultural dependency and domination", "cultural synchronization", "electronic colonialism", "ideological imperialism", and "economic imperialism" have all been used to describe the same basic notion of cultural imperialism.
The term refers largely to the exercise of power in a cultural relationship in which the principles, ideas, practices, and values of a powerful, invading society are imposed upon indigenous cultures in the occupied areas. The process is often used to describe examples of when the compulsory practices of the cultural traditions of the imperial social group are implemented upon a conquered social group. The process is also present when powerful nations are able to flood the information and media space with their ideas, limiting countries and communities' ability to compete and expose people to locally created content.
Cultural imperialism has been called a process that intends to transition the "cultural symbols of the invading communities from 'foreign' to 'natural,' 'domestic,'" comments Jeffrey Herlihy-Mera. He described the process as being carried out in three phases: first by merchants, then the military, then politicians. While the third phase continues "in perpetuity", cultural imperialism tends to be "gradual, contested (and continues to be contested), and is by nature incomplete. The partial and imperfect configuration of this ontology takes an implicit conceptualization of reality and attempts—and often fails—to elide other forms of collective existence." In order to achieve that end, cultural engineering projects strive to "isolate residents within constructed spheres of symbols" such that they (eventually, in some cases after several generations) abandon other cultures and identify with the new symbols. "The broader intended outcome of these interventions might be described as a common recognition of "possession" of the land itself (on behalf of the organizations publishing and financing the images)."
For Herbert Schiller, cultural imperialism refers to the American Empire's "coercive and persuasive agencies, and their capacity to promote and universalize an American 'way of life' in other countries without any reciprocation of influence." According to Schiller, cultural imperialism "pressured, forced and bribed" societies to integrate with the U.S.'s expansive capitalist model but also incorporated them with attraction and persuasion by winning "the mutual consent, even solicitation of the indigenous rulers." He continues, remarking that it is: "the sum of the processes by which a society is brought into the modern [U.S.-centered] world system and how its dominating stratum is attracted, pressured, forced, and sometimes bribed into shaping social institutions to correspond to, or even promote, the values and structures of the dominating centres of the system. The public media are the foremost example of operating enterprises that are used in the penetrative process. For penetration on a significant scale the media themselves must be captured by the dominating/penetrating power. This occurs largely through the commercialization of broadcasting." The historical contexts, iterations, complexities, and politics of Schiller's foundational and substantive theorization of cultural imperialism in international communication and media studies are discussed in detail by political economy of communication researchers Richard Maxwell, Vincent Mosco, Graham Murdock, and Tanner Mirrlees.
Downing and Sreberny-Mohammadi state: "Cultural imperialism signifies the dimensions of the process that go beyond economic exploitation or military force. In the history of colonialism, (i.e., the form of imperialism in which the government of the colony is run directly by foreigners), the educational and media systems of many Third World countries have been set up as replicas of those in Britain, France, or the United States and carry their values. Western advertising has made further inroads, as have architectural and fashion styles. Subtly but powerfully, the message has often been insinuated that Western cultures are superior to the cultures of the Third World."
Poststructuralism.
In poststructuralist and postcolonial theory, "cultural imperialism" is often understood as the cultural legacy of Western colonialism, or forms of social action contributing to the continuation of Western hegemony. To some outside this discourse, the term is critiqued as being unclear, unfocused, and/or contradictory in nature.
The work of French philosopher and social theorist Michel Foucault has heavily influenced use of the term "cultural imperialism", particularly his philosophical interpretation of power and his concept of governmentality. Following an interpretation of power similar to that of Machiavelli, Foucault defines power as immaterial, as a "certain type of relation between individuals" that has to do with complex strategic social positions that relate to the subject's ability to control its environment and influence those around itself. According to Foucault, power is intimately tied to his conception of truth. "Truth", as he defines it, is a "system of ordered procedures for the production, regulation, distribution, circulation, and operation of statements" which has a "circular relation" with systems of power. Therefore, inherent in systems of power is always "truth", which is culturally specific and inseparable from ideology, often coinciding with various forms of hegemony, including cultural imperialism.
Foucault's interpretation of governance is also very important in constructing theories of transnational power structure. In his lectures at the Collège de France, Foucault often defines governmentality as the broad art of "governing", which goes beyond the traditional conception of governance in terms of state mandates, and into other realms such as governing "a household, souls, children, a province, a convent, a religious order, a family". This relates directly back to Machiavelli's treatise on how to retain political power at any cost, "The Prince", and to Foucault's aforementioned conceptions of truth and power: various subjectivities are created through power relations that are culturally specific, which lead to various forms of culturally specific governmentality, such as neoliberal governmentality.
Post-colonialism.
Edward Saïd is a founding figure of postcolonialism, established with the book "Orientalism" (1978), a humanist critique of the Enlightenment, which criticises Western knowledge of "the East"—specifically the English and French constructions of what is and what is not "Oriental". This "knowledge", Saïd argues, led to cultural tendencies towards a binary opposition of the Orient vs. the Occident, wherein one concept is defined in opposition to the other, and from which the two emerge as being of unequal value. In "Culture and Imperialism" (1993), the sequel to "Orientalism", Saïd proposes that, despite the formal end of the "age of empire" after the Second World War (1939–1945), colonial imperialism left a cultural legacy to the (previously) colonised peoples, which remains in their contemporary civilisations; and that American "cultural imperialism" is very influential in the international systems of power.
In "Can the Subaltern Speak?" Gayatri Chakravorty Spivak critiques common representations in the West of the Sati, as being controlled by authors other than the participants (specifically English colonizers and Hindu leaders). Because of this, Spivak argues that the subaltern, referring to the communities that participate in the Sati, are not able to represent themselves through their own voice. Spivak says that cultural imperialism has the power to disqualify or erase the knowledge and mode of education of certain populations that are low on the social and economic hierarchy.
In "A Critique of Postcolonial Reason", Spivak argues that Western philosophy has a history of not only exclusion of the subaltern from discourse, but also does not allow them to occupy the space of a fully human subject.
Contemporary ideas and debate.
"Cultural imperialism" can refer to either the forced acculturation of a subject population, or to the voluntary embracing of a foreign culture by individuals who do so of their own free will. Since these are two very different referents, the validity of the term has been called into question.
Cultural influence can be seen by the "receiving" culture as either a threat to or an enrichment of its cultural identity. It seems therefore useful to distinguish between cultural imperialism as an (active or passive) attitude of superiority, and the position of a culture or group that seeks to complement its own cultural production, considered partly deficient, with imported products.
The imported products or services can themselves represent, or be associated with, certain values (such as consumerism). According to one argument, the "receiving" culture does not necessarily perceive this link, but instead absorbs the foreign culture passively through the use of the foreign goods and services. Due to its somewhat concealed, but very potent nature, this hypothetical idea is described by some experts as "banal imperialism". For example, it is argued that while "American companies are accused of wanting to control 95 percent of the world's consumers", "cultural imperialism involves much more than simple consumer goods; it involved the dissemination of American principles such as freedom and democracy", a process which "may sound appealing" but which "masks a frightening truth: many cultures around the world are disappearing due to the overwhelming influence of corporate and cultural America".
Some believe that the newly globalised economy of the late 20th and early 21st century has facilitated this process through the use of new information technology. This kind of cultural imperialism is derived from what is called "soft power". The theory of electronic colonialism extends the issue to global cultural issues and the impact of major multi-media conglomerates, ranging from Paramount, WarnerMedia, AT&T, Disney, News Corp, to Google and Microsoft with the focus on the hegemonic power of these mainly United States–based communication giants.
Cultural diversity.
One of the reasons often given for opposing any form of cultural imperialism, voluntary or otherwise, is the preservation of cultural diversity, a goal seen by some as analogous to the preservation of ecological diversity. Proponents of this idea argue either that such diversity is valuable in itself, to preserve human historical heritage and knowledge, or instrumentally valuable because it makes available more ways of solving problems and responding to catastrophes, natural or otherwise.
Africa.
Of all the areas of the world that scholars have claimed to be adversely affected by imperialism, Africa is probably the most notable. In the expansive "age of imperialism" of the nineteenth century, scholars have argued, European colonisation in Africa led to the elimination of many cultures, worldviews, and epistemologies, particularly through the neocolonisation of public education. This, arguably, led to uneven development and to further informal forms of social control having to do with culture and imperialism. A variety of factors, scholars argue, lead to the elimination of cultures, worldviews, and epistemologies, such as "de-linguicization" (replacing native African languages with European ones), devaluing ontologies that are not explicitly individualistic, and at times going as far as to define Western culture itself as knowledge while denying that non-Western approaches to science, the arts, and indigenous culture count as knowledge at all. One scholar, Ali A. Abdi, claims that imperialism inherently "involve[s] extensively interactive regimes and heavy contexts of identity deformation, misrecognition, loss of self-esteem, and individual and social doubt in self-efficacy." Therefore, all imperialism would always, already be cultural.
Neoliberalism.
Neoliberalism is often critiqued by sociologists, anthropologists, and cultural studies scholars as being culturally imperialistic. Critics of neoliberalism, at times, claim that it is the newly predominant form of imperialism. Other scholars, such as Elizabeth Dunn and Julia Elyachar have claimed that neoliberalism requires and creates its own form of governmentality.
In Dunn's work, "Privatizing Poland", she argues that the expansion of the multinational corporation Gerber into Poland in the 1990s imposed Western, neoliberal governmentality, ideologies, and epistemologies upon the post-Soviet people it hired. Cultural conflicts occurred most notably over the company's inherently individualistic policies, such as promoting competition among workers rather than cooperation, and over its strong opposition to what the company owners claimed was bribery.
In Elyachar's work, "Markets of Dispossession", she focuses on ways in which, in Cairo, NGOs along with INGOs and the state promoted neoliberal governmentality through schemas of economic development that relied upon "youth microentrepreneurs". Youth microentrepreneurs would receive small loans to build their own businesses, similar to the way that microfinance supposedly operates. Elyachar argues though, that these programs not only were a failure, but that they shifted cultural opinions of value (personal and cultural) in a way that favoured Western ways of thinking and being.
Development studies.
Often, methods of promoting development and social justice are critiqued as being imperialistic in a cultural sense. For example, Chandra Mohanty has critiqued Western feminism, claiming that it has created a misrepresentation of the "third world woman" as being completely powerless, unable to resist male dominance. Thus, this leads to the often critiqued narrative of the "white man" saving the "brown woman" from the "brown man". Other, more radical critiques of development studies, have to do with the field of study itself. Some scholars even question the intentions of those developing the field of study, claiming that efforts to "develop" the Global South were never about the South itself. Instead, these efforts, it is argued, were made in order to advance Western development and reinforce Western hegemony.
Media effects studies.
The core of the cultural imperialism thesis is integrated with the traditional political-economy approach in media effects research. Critics of cultural imperialism commonly claim that non-Western cultures, particularly from the Third World, will forsake their traditional values and lose their cultural identities when they are solely exposed to Western media. Nonetheless, Michael B. Salwen, in his book "Critical Studies in Mass Communication" (1991), claims that cross-consideration and integration of empirical findings on cultural imperialist influences is critical for understanding mass media in the international sphere. He recognises two contradictory contexts of cultural imperialist impact.
The first context is where cultural imperialism imposes socio-political disruptions on developing nations. Western media can distort images of foreign cultures and provoke personal and social conflicts within developing nations in some cases.
Another context is that peoples in developing nations resist foreign media and preserve their cultural attitudes. Although he admits that outward manifestations of Western culture may be adopted, the fundamental values and behaviours still remain. Furthermore, positive effects might occur when male-dominated cultures adopt the "liberation" of women with exposure to Western media, and this stimulates ample cultural exchange.
Criticisms of "cultural imperialism theory".
Critics of scholars who discuss cultural imperialism raise a number of objections. "Cultural imperialism" is a term that is only used in discussions where cultural relativism and constructivism are generally taken as true. (One cannot critique promoting Western values if one believes that said values are good. Similarly, one cannot argue that Western epistemology is unjustly promoted in non-Western societies if one believes that those epistemologies are good.) Therefore, those who disagree with cultural relativism and/or constructivism may critique the employment of the term "cultural imperialism" on those terms.
John Tomlinson provides a critique of cultural imperialism theory and reveals major problems in the way in which the idea of cultural, as opposed to economic or political, imperialism is formulated. In his book "Cultural Imperialism: A Critical Introduction", he delves into the much-debated "media imperialism" theory. Summarizing research on the Third World's reception of American television shows, he challenges the cultural imperialism argument, conveying his doubts about the degree to which US shows in developing nations actually carry US values and improve the profits of US companies. Tomlinson suggests that cultural imperialism is growing in some respects, but local transformations and interpretations of imported media products indicate that cultural diversification is not at an end in global society. He explains that one of the fundamental conceptual mistakes of cultural imperialism is to take for granted that the distribution of cultural goods can be considered as cultural dominance. He thus supports his argument by strongly criticising the concept that Americanization is occurring through the global overflow of American television products. He points to a myriad of examples of television networks that have managed to dominate their domestic markets and notes that domestic programs generally top the ratings. He also doubts the concept that cultural agents are passive receivers of information. He states that movement between cultural/geographical areas always involves translation, mutation, adaptation, and the creation of hybridity.
Other key critiques are that the term is not defined well, and employs further terms that are not defined well, and therefore lacks explanatory power, that "cultural imperialism" is hard to measure, and that the theory of a legacy of colonialism is not always true.
Dealing with cultural dominance.
David Rothkopf, managing director of Kissinger Associates and an adjunct professor of international affairs at Columbia University (who also served as a senior U.S. Commerce Department official in the Clinton Administration), wrote about cultural imperialism in his provocatively titled "In Praise of Cultural Imperialism?" in the summer 1997 issue of "Foreign Policy" magazine. Rothkopf says that the United States should embrace "cultural imperialism" as in its self-interest. But his definition of cultural imperialism stresses spreading the values of tolerance and openness to cultural change in order to avoid war and conflict between cultures as well as expanding accepted technological and legal standards to provide free traders with enough security to do business with more countries. Rothkopf's definition almost exclusively involves allowing individuals in other nations to accept or reject foreign cultural influences. He also mentions, but only in passing, the use of the English language and consumption of news and popular music and film as cultural dominance that he supports. Rothkopf additionally makes the point that globalisation and the Internet are accelerating the process of cultural influence.
Culture is sometimes used by the organisers of society—politicians, theologians, academics, and families—to impose and ensure order, the rudiments of which change over time as need dictates. One need only look at the 20th century's genocides. In each one, leaders used culture as a political front to fuel the passions of their armies and other minions and to justify their actions among their people.
Rothkopf then cites genocides and massacres in Armenia, Russia, the Holocaust, Cambodia, Bosnia and Herzegovina, Rwanda and East Timor as examples of culture (in some cases expressed in the ideology of "political culture" or religion) being misused to justify violence. He also acknowledges that cultural imperialism in the past has been guilty of forcefully eliminating the cultures of natives in the Americas and in Africa, or through use of the Inquisition, "and during the expansion of virtually every empire." The most important way to deal with cultural influence in any nation, according to Rothkopf, is to promote tolerance and allow, or even promote, cultural diversities that are compatible with tolerance and to eliminate those cultural differences that cause violent conflict:
Successful multicultural societies, be they nations, federations, or other conglomerations of closely interrelated states, discern those aspects of culture that do not threaten union, stability, or prosperity (such as food, holidays, rituals, and music) and allow them to flourish. But they counteract or eradicate the more subversive elements of culture (exclusionary aspects of religion, language, and political/ideological beliefs). History shows that bridging cultural gaps successfully and serving as a home to diverse peoples requires certain social structures, laws, and institutions that transcend culture. Furthermore, the history of a number of ongoing experiments in multiculturalism, such as in the European Union, India, South Africa, Canada and the United States, suggests that workable, if not perfected, integrative models exist. Each is built on the idea that tolerance is crucial to social well-being, and each at times has been threatened by both intolerance and a heightened emphasis on cultural distinctions. The greater public good warrants eliminating those cultural characteristics that promote conflict or prevent harmony, even as less-divisive, more personally observed cultural distinctions are celebrated and preserved.
Cultural dominance can also be seen in 1930s Australia, where the Aboriginal Assimilation Policy acted as an attempt to wipe out the Native Australian people. The British settlers tried to biologically alter the skin colour of the Australian Aboriginal people through mixed breeding with white people. The policy also made attempts to forcibly conform Aboriginal people to Western ideas of dress and education.
In history.
Although the term was popularised in the 1960s, and was used by its original proponents to refer to cultural hegemonies in a post-colonial world, cultural imperialism has also been used to refer to times further in the past.
Antiquity.
The Ancient Greeks have a reputation for spreading their culture around the Mediterranean and Near East through trade and conquest. During the Archaic Period (c. 800 to 480 BC), the burgeoning Greek city-states established settlements and colonies across the Mediterranean Sea, especially in Sicily and southern Italy, influencing the Etruscan and Roman peoples of the region. Greek art affected the style of Scythian artworks through Greek trading colonies in the Black Sea region. In the late-fourth century BC, Alexander the Great conquered Persian and Indian territories all the way to the Indus River Valley and Punjab, spreading Greek religion, art, and science along the way. This resulted in the rise of Hellenistic kingdoms and cities across Egypt, the Near East, Central Asia, and Northwest India, where Greek culture fused with the cultures of the existing populations. The Greek influence prevailed even longer in science and literature: medieval Muslim scholars in the Middle East studied the writings of Aristotle for scientific insights.
The Roman Empire also implemented cultural imperialism. Early Rome, in its conquest of Italy, assimilated the people of Etruria by replacing the Etruscan language with Latin, which led to the demise of that language and of many aspects of Etruscan civilisation. Cultural Romanization grew in many parts of Rome's empire, with "many regions receiving Roman culture unwillingly, as a form of cultural imperialism." After Roman armies conquered Greece, Rome set about altering the culture of Greece to conform with Roman ideals. For instance, the Greek habit of stripping naked, in public, for exercise, was looked on askance by Roman writers, who considered the practice to be a cause of the Greeks' effeminacy and enslavement. The Roman example has been linked to modern instances of European imperialism in African countries, bridging the two instances with Slavoj Žižek's discussions of "empty signifiers". The Pax Romana was secured in the empire, in part, by the "forced acculturation of the culturally diverse populations that Rome had conquered." The first documented imperialist occupation of Britain dates from this period.
British Empire.
British worldwide expansion in the 18th and 19th centuries was an economic and political phenomenon. However, "there was also a strong social and cultural dimension to it, which Rudyard Kipling termed the 'white man's burden'." One of the ways this was carried out was by religious proselytising, by, amongst others, the London Missionary Society, which was "an agent of British cultural imperialism." Another way, was by the imposition of educational material on the colonies for an "imperial curriculum". Robin A. Butlin writes, "The promotion of empire through books, illustrative materials, and educational syllabuses was widespread, part of an education policy geared to cultural imperialism". This was also true of science and technology in the empire. Douglas M. Peers and Nandini Gooptu note that "Most scholars of colonial science in India now prefer to stress the ways in which science and technology worked in the service of colonialism, as both a 'tool of empire' in the practical sense and as a vehicle for cultural imperialism. In other words, science developed in India in ways that reflected colonial priorities, tending to benefit Europeans at the expense of Indians, while remaining dependent on and subservient to scientific authorities in the colonial metropolis." British sports were spread across the Empire partially as a way of encouraging British values and cultural uniformity, though this was tempered by the fact that colonised peoples gained a sense of nationalistic pride by defeating the British in their own sports.
The analysis of cultural imperialism carried out by Edward Said drew principally from a study of the British Empire. According to Danilo Raponi, the cultural imperialism of the British in the 19th century had a much wider effect than only in the British Empire. He writes, "To paraphrase Said, I see cultural imperialism as a complex cultural hegemony of a country, Great Britain, that in the 19th century had no rivals in terms of its ability to project its power across the world and to influence the cultural, political and commercial affairs of most countries. It is the 'cultural hegemony' of a country whose power to export the most fundamental ideas and concepts at the basis of its understanding of 'civilisation' knew practically no bounds." In this, for example, Raponi includes Italy.
Other pre-Second World War examples.
The New Cambridge Modern History writes about the cultural imperialism of Napoleonic France. Napoleon used the Institut de France "as an instrument for transmuting French universalism into cultural imperialism." Members of the institute (who included Napoleon) descended upon Egypt in 1798. "Upon arrival they organised themselves into an Institute of Cairo. The Rosetta Stone is their most famous find. The science of Egyptology is their legacy."
After the First World War, Germans were worried about the extent of French influence in the occupied Rhineland, which under the terms of the Treaty of Versailles was under Allied control from 1918 to 1930. An early use of the term appeared in an essay by Paul Ruhlmann (as "Peter Hartmann") at that date, entitled "French Cultural Imperialism on the Rhine".
North American colonisation.
Keeping in line with the trends of international imperialistic endeavours, the expansion of Canadian and American territory in the 19th century saw cultural imperialism employed as a means of control over indigenous populations. This, when used in conjunction with more traditional forms of ethnic cleansing and genocide in the United States, had devastating, lasting effects on indigenous communities.
In 2017 Canada celebrated its 150-year anniversary of the confederating of three British colonies. As Catherine Murton Stoehr points out in "Origins", a publication organised by the history departments of Ohio State University and Miami University, the occasion came with remembrance of Canada's treatment of First Nations people.
Numerous policies focused on indigenous persons came into effect shortly thereafter. Most notable is the use of residential schools across Canada as a means to remove indigenous persons from their culture and instill in them the beliefs and values of the majorised colonial hegemony. The policies of these schools, as described by Ward Churchill in his book "Kill the Indian, Save the Man", were to forcefully assimilate students, who were often removed with force from their families. These schools forbade students from using their native languages and participating in their own cultural practices. Residential schools were largely run by Christian churches, operating in conjunction with Christian missions with minimal government oversight. The book "Stolen Lives: The Indigenous Peoples of Canada and the Indian Residential Schools" describes this form of operation: "The government provided little leadership, and the clergy in charge were left to decide what to teach and how to teach it. Their priority was to impart the teachings of their church or order—not to provide a good education that could help students in their post-graduation lives." In a "New York Times" op-ed, Gabrielle Scrimshaw describes her grandparents being forced to send her mother to one of these schools or risk imprisonment, and hiding her on "school pick-up day" to avoid sending her to institutions whose abuse was well known at the time (the mid-20th century). Scrimshaw's mother was left with limited options for further education, she says, and is today illiterate as a result. Scrimshaw explains, "Seven generations of my ancestors went through these schools. Each new family member enrolled meant a compounding of abuse and a steady loss of identity, culture and hope. My mother was the last generation. The experience left her broken, and like so many, she turned to substances to numb these pains." A report, republished by CBC News, estimates nearly 6,000 children died in the care of these schools.
The colonisation of native peoples in North America remains active today despite the closing of the majority of residential schools. This form of cultural imperialism continues in the use of Native Americans as mascots for schools and athletic teams. Jason Edward Black, a professor and chair in the Department of Communication Studies at the University of North Carolina at Charlotte, describes how the use of Native Americans as mascots furthers the colonial attitudes of the 18th and 19th centuries.
In "Deciphering Pocahontas", Kent Ono and Derek Buescher wrote: "Euro-American culture has made a habit of appropriating, and redefining what is 'distinctive' and constitutive of Native Americans."
Nazi colonialism.
"Cultural imperialism" has also been used in connection with the expansion of German influence under the Nazis in the middle of the twentieth century. Alan Steinweis and Daniel Rogers note that even before the Nazis came to power, "Already in the Weimar Republic, German academic specialists on eastern Europe had contributed through their publications and teaching to the legitimization of German territorial revanchism and cultural imperialism. These scholars operated primarily in the disciplines of history, economics, geography, and literature." In the area of music, Michael Kater writes that during the WWII German occupation of France, Hans Rosbaud, a German conductor based by the Nazi regime in Strasbourg, became "at least nominally, a servant of Nazi cultural imperialism directed against the French."
In Italy during the war, Germany pursued "a European cultural front that gravitates around German culture". The Nazi propaganda minister Joseph Goebbels set up the European Union of Writers, "one of Goebbels's most ambitious projects for Nazi cultural hegemony. Presumably a means of gathering authors from Germany, Italy, and the occupied countries to plan the literary life of the new Europe, the union soon emerged as a vehicle of German cultural imperialism." For other parts of Europe, Robert Gerwarth, writing about cultural imperialism and Reinhard Heydrich, states that the "Nazis' Germanization project was based on a historically unprecedented programme of racial stock-taking, theft, expulsion and murder." Also, "The full integration of the [Czech] Protectorate into this New Order required the complete Germanization of the Protectorate's cultural life and the eradication of indigenous Czech and Jewish culture."
Nazi Germany's actions reflect the significant role that notions of race and culture play in imperialism. The asserted distinction between Germans and Jews created the illusion among Germans that they were superior to a Jewish "inferior", a notion of us/them and self/other.
Western imperialism.
Cultural imperialism manifests in the Western world in the form of the legal system, including the commodification and marketing of indigenous resources (for example medicinal, spiritual or artistic ones) and genetic resources (for example human DNA).
Americanization.
The terms "McDonaldization", "Disneyization" and "Cocacolonization" have been coined to describe the spread of Western cultural influence, especially after the end of the Cold War. These Western influences often have personal, social, economical, and historical impact on people globally depending on the country and region. “Virtually all countries are moving discernibly toward the U.S. model, and the process is self reinforcing”, Herman, E. and McChesney, R. (n.d.). "Media Globalization: The US Experience and Influence".
Many countries are affected by the US and its pop culture. For example, Nigeria's film industry, referred to as "Nollywood", is the second largest in the world, producing more films annually than the United States, and its films are shown across Africa. Another term that describes the spread of Western cultural influence is "Hollywoodization": the promotion of American culture through Hollywood films, which can culturally affect viewers of those films.
See also.
Japanization
|
6271
|
48970003
|
https://en.wikipedia.org/wiki?curid=6271
|
Chemical reaction
|
A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. When chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction.
Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory.
History.
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals have been known since antiquity. Initial theories of the transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles, stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.
The artificial production of chemical substances already was a central goal for medieval alchemists. Examples include the synthesis of ammonium chloride from organic substances as described in the works (c. 850–950) attributed to Jābir ibn Ḥayyān, or the production of mineral acids such as sulfuric and nitric acids by later alchemists, starting from c. 1300. The production of mineral acids involved the heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis.
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations.
In organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
Characteristics.
The general characteristics of chemical reactions include the evolution of a gas, the formation of a precipitate, a change of colour, a change of temperature (the absorption or release of heat), and a change of state.
Equations.
Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields". The tip of the arrow points in the direction in which the reaction proceeds. A double arrow (⇌) pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to the stoichiometry: the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in the schematic example below) by the appropriate integers "a", "b", "c" and "d":
<chem display="block">aA + bB -> cC + dD</chem>
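To illustrate the balancing requirement in executable form, here is a minimal Python sketch (the helper names are invented for this example, and no chemistry library is assumed) that counts the atoms of each element on both sides of an equation and checks that the chosen coefficients make the counts agree:

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple formula such as 'H2O' or 'CO2'.

    Element symbols may be followed by an integer count; nested
    parentheses are not handled in this sketch.
    """
    counts = Counter()
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += int(number) if number else 1
    return counts

def is_balanced(reactants, products):
    """Check total atom counts agree; each side is a list of (coefficient, formula)."""
    def side_total(side):
        total = Counter()
        for coeff, formula in side:
            for element, n in atom_counts(formula).items():
                total[element] += coeff * n
        return total
    return side_total(reactants) == side_total(products)

# 2 H2 + O2 -> 2 H2O is balanced; H2 + O2 -> H2O is not.
print(is_balanced([(2, "H2"), (1, "O2")], [(2, "H2O")]))  # True
print(is_balanced([(1, "H2"), (1, "O2")], [(1, "H2O")]))  # False
```

A full balancer would go further and solve for the coefficients themselves, which amounts to finding an integer null-space vector of the element-by-species composition matrix.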
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.
Elementary reactions.
The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.
The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
<chem>AB -> A + B</chem>
For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction.
<chem>A + B -> AB</chem>
Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis.
<chem>HA + B -> A + HB</chem>
for example
<chem>NaCl + AgNO3 -> NaNO3 + AgCl(v)</chem>
Chemical equilibrium.
Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentration and therefore change with the time of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on parameters such as temperature, pressure, and the materials involved, and is determined by the minimum free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with the Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with fewer moles of gas.
The reaction yield stabilizes at equilibrium but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant but does affect the equilibrium position.
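The approach to equilibrium described above can be made concrete with a short simulation. The Python sketch below (the rate constants and time step are arbitrary illustrative values) integrates a reversible first-order reaction A ⇌ B with explicit Euler steps, so that the forward rate falls and the reverse rate rises until they match:

```python
# Reversible reaction A <=> B, first order in both directions.
# kf and kr are hypothetical rate constants chosen for illustration.
kf, kr = 0.5, 0.2   # forward and reverse rate constants (1/s)
A, B = 1.0, 0.0     # initial concentrations (mol/L)
dt = 0.01           # Euler time step (s)

for _ in range(3000):       # simulate 30 s of reaction time
    forward = kf * A        # instantaneous rate of A -> B
    reverse = kr * B        # instantaneous rate of B -> A
    A += (reverse - forward) * dt
    B += (forward - reverse) * dt

# At equilibrium the net rate is zero, so B/A approaches kf/kr = 2.5.
print(f"A = {A:.4f} mol/L, B = {B:.4f} mol/L, B/A = {B / A:.3f}")
```

The final ratio [B]/[A] depends only on the ratio of the rate constants, consistent with the statement that changing concentrations shifts the equilibrium position but not the equilibrium constant.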
Thermodynamics.
Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release free energy. The associated free energy change of the reaction is composed of the changes of two different thermodynamic quantities, enthalpy and entropy:
; Δ"G" = Δ"H" − "T"·Δ"S".
Reactions can be exothermic, where Δ"H" is negative and energy is released. Typical examples of exothermic reactions are combustion, precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous or dissolved reaction products, which have higher entropy. Since the entropy term in the free-energy change increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur preferably at lower temperatures. A change in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:
<chem>2CO(g) + MoO2(s) -> 2CO2(g) + Mo(s)</chem>; Δ"H"° > 0 at low temperatures
This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature. Δ"H"° is zero at a certain crossover temperature, and the reaction becomes exothermic above that temperature.
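This sign change follows directly from Δ"G" = Δ"H" − "T"Δ"S". The Python sketch below uses made-up values of Δ"H" and Δ"S" for a generic endothermic, entropy-driven reaction (not the actual molybdenum data) and locates the crossover temperature at which Δ"G" changes sign:

```python
# Hypothetical thermodynamic data for a generic reaction (illustrative only).
dH = 40_000.0   # enthalpy change, J/mol (positive: endothermic)
dS = 50.0       # entropy change, J/(mol*K) (positive: entropy increases)

def delta_G(T):
    """Gibbs free energy change (J/mol) at absolute temperature T (K)."""
    return dH - T * dS

# Delta G vanishes where dH = T*dS, i.e. at T = dH/dS.
T_cross = dH / dS
print(f"crossover at T = {T_cross:.0f} K")

for T in (300.0, T_cross, 1200.0):
    g = delta_G(T)
    label = "spontaneous" if g < 0 else "non-spontaneous" if g > 0 else "at equilibrium"
    print(f"T = {T:6.1f} K: dG = {g:9.1f} J/mol ({label})")
```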
Changes in temperature can also reverse the direction tendency of a reaction. For example, the water gas shift reaction
<chem>CO(g) + H2O({v}) <=> CO2(g) + H2(g)</chem>
is favored by low temperatures, but its reverse is favored by high temperatures. The shift in reaction direction tendency occurs at a specific crossover temperature.
Reactions can also be characterized by their internal energy change, which takes into account changes in the entropy, volume and chemical potentials. The latter depends, among other things, on the activities of the involved substances.
; d"U" = "T" d"S" − "p" d"V" + "μ" d"n", where "μ" is the chemical potential and "n" the amount of substance.
Kinetics.
The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as the reactant concentrations, the surface area available for contact between the reactants, the pressure, the activation energy, the temperature, and the presence or absence of a catalyst.
Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate "v" of a first-order reaction, which could be the disintegration of a substance A, is given by:
"v" = −d[A]/d"t" = "k"·[A]
Its integration yields:
[A]("t") = [A]0·exp(−"kt")
Here "k" is the first-order rate constant, having dimension 1/time, [A]("t") is the concentration at a time "t" and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with a characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:
"k" = "k"0·exp(−"E"a/("k"B"T"))
where "E"a is the activation energy and "k"B is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.
Reaction types.
Four basic types.
Synthesis.
In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form:
<chem display="block">A + B->AB</chem>
Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide:
<chem display="block">8Fe + S8->8FeS</chem>
Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water.
Decomposition.
A decomposition reaction is when a more complex substance breaks down into its more simple parts. It is thus the opposite of a synthesis reaction and can be written as
<chem display="block">AB->A + B</chem>
One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas:
<chem display="block">2H2O->2H2 + O2</chem>
Single displacement.
In a single displacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form of:
<chem display="block">A + BC->AC + B</chem>
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make solid magnesium hydroxide and hydrogen gas:
<chem display="block">Mg + 2H2O->Mg(OH)2 (v) + H2 (^)</chem>
Double displacement.
In a double displacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds. These reactions are in the general form:
<chem display="block">AB + CD->AD + CB</chem>
For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO42− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2.
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:
<chem display="block">Pb(NO3)2 + 2KI->PbI2(v) + 2KNO3</chem>
Forward and backward reactions.
According to Le Chatelier's Principle, reactions may proceed in the forward or reverse direction until they end or reach equilibrium.
Forward reactions.
Reactions that proceed in the forward direction (from left to right) to approach equilibrium are often called spontaneous reactions, that is, Δ"G" is negative, which means that if they occur at constant temperature and pressure, they decrease the Gibbs free energy of the reaction. They require less energy to proceed in the forward direction. Reactions are usually written as forward reactions in the direction in which they are spontaneous.
Backward reactions.
Reactions that proceed in the backward direction to approach equilibrium are often called non-spontaneous reactions, that is, Δ"G" is positive, which means that if they occur at constant temperature and pressure, they increase the Gibbs free energy of the reaction. They require an input of energy to proceed in the forward direction.
Combustion.
In a combustion reaction, an element or compound reacts with an oxidant, usually oxygen, often producing energy in the form of heat or light. Combustion reactions frequently involve a hydrocarbon. For instance, the combustion of 1 mole (114 g) of octane in oxygen
<chem display="block">C8H18(l) + 25/2 O2(g)->8CO2 + 9H2O(l)</chem>
releases 5500 kJ. A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen.
<chem display="block">2Mg(s) + O2->2MgO(s)</chem>
<chem display="block">S(s) + O2(g)->SO2(g)</chem>
Oxidation and reduction.
Redox reactions can be understood in terms of the transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is "oxidized" and the latter is "reduced". Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state of atoms and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt:
<chem display="block">2Na(s) + Cl2(g)->2NaCl(s)</chem>
In the reaction, sodium metal goes from an oxidation state of 0 (a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation state of 0 (also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, it induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces reduction in the other species and is considered the "reducing agent".
Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativities, such as most metals, easily donate electrons and oxidize – they are reducing agents. On the contrary, many oxides or ions with high oxidation numbers of their non-oxygen atoms, such as the permanganate and dichromate ions, can gain one or two extra electrons and are strong oxidizing agents.
For some main-group elements the number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron, respectively. Noble gases themselves are chemically inactive.
The overall redox reaction can be balanced by combining the oxidation and reduction half-reactions multiplied by coefficients such that the number of electrons lost in the oxidation equals the number of electrons gained in the reduction.
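The coefficient matching can be written out explicitly: scale each half-reaction so that both transfer the least common multiple of their electron counts. A minimal Python sketch, using the aluminium-oxygen pair as a worked example:

```python
from math import gcd

def combine_half_reactions(e_oxidation, e_reduction):
    """Return coefficients that equalize electrons lost and gained.

    e_oxidation: electrons released per unit of the oxidation half-reaction
    e_reduction: electrons consumed per unit of the reduction half-reaction
    """
    lcm = e_oxidation * e_reduction // gcd(e_oxidation, e_reduction)
    return lcm // e_oxidation, lcm // e_reduction

# Al -> Al3+ + 3e- (oxidation); O2 + 4e- -> 2 O2- (reduction).
ox, red = combine_half_reactions(3, 4)
print(ox, red)  # 4 and 3, i.e. 4 Al + 3 O2 -> 2 Al2O3
```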
An important class of redox reactions are the electrolytic electrochemical reactions, where electrons from the power supply at the negative electrode are used as the reducing agent and electron withdrawal at the positive electrode as the oxidizing agent. These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons are released in redox reactions and chemical energy is converted to electrical energy, is possible and used in batteries.
Complexation.
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by providing lone pairs of the ligand into empty orbitals of the metal atom and forming dipolar bonds. The ligands are Lewis bases; they can be both ions and neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, which says that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom.
Acid–base reactions.
In the Brønsted–Lowry acid–base theory, an acid–base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation:
<chem display="block">\underset{acid}{HA} + \underset{base}{B} <=> \underset{conjugated\ base}{A^-} + \underset{conjugated\ acid}{HB+}</chem>
The reverse reaction is possible, and thus the acid/conjugate base and base/conjugate acid pairs are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants ("K"a and "K"b) of the involved substances. A special case of the acid–base reaction is neutralization, in which an acid and a base, taken in exactly equal amounts, form a neutral salt.
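As a worked example (illustrative numbers, not from the original text): for a weak acid HA at initial concentration $C = 0.10\ \mathrm{mol/L}$ with $K_\mathrm{a} = 1.8\times10^{-5}$ (approximately that of acetic acid), the equilibrium condition $K_\mathrm{a} = \frac{[\mathrm{H^+}][\mathrm{A^-}]}{[\mathrm{HA}]} \approx \frac{x^2}{C}$ gives $x = [\mathrm{H^+}] \approx \sqrt{K_\mathrm{a} C} = \sqrt{1.8\times10^{-6}} \approx 1.3\times10^{-3}\ \mathrm{mol/L}$, hence $\mathrm{pH} = -\log_{10}[\mathrm{H^+}] \approx 2.9$.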
Acid–base reactions can have different definitions depending on the acid–base concept employed. Some of the most common are the Arrhenius definition (acids dissociate in water releasing <chem>H3O+</chem> ions, bases release <chem>OH-</chem> ions), the Brønsted–Lowry definition used above (acids are proton donors, bases are proton acceptors), and the Lewis definition (acids are electron-pair acceptors, bases are electron-pair donors).
Precipitation.
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and an insoluble salt forms. The process can be assisted by adding a precipitating agent or by removing the solvent. Rapid precipitation yields an amorphous or microcrystalline residue, while a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.
Solid-state reactions.
Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and finely dividing the reactant to increase the contacting surface area.
Reactions at the solid/gas interface.
Reactions can also take place at the solid–gas interface, i.e., on surfaces at very low pressure such as under ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis.
Photochemical reactions.
In photochemical reactions, atoms and molecules absorb energy (photons) from the illuminating light and are promoted to an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.
Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, releasing oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth's atmosphere and constitute atmospheric chemistry.
Catalysis.
In catalysis, the reaction does not proceed directly, but through a reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, forming weak bonds with reactants or intermediates, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, in which the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction that is kinetically inhibited by a high activation energy can take place via an alternative pathway that circumvents this activation energy.
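The effect can be quantified (an illustrative calculation, not from the original text) with the Arrhenius equation $k = A\,e^{-E_\mathrm{a}/RT}$: if a catalyst lowers the activation energy from $100\ \mathrm{kJ/mol}$ to $80\ \mathrm{kJ/mol}$ at $T = 298\ \mathrm{K}$, and the pre-exponential factor $A$ is assumed unchanged, the rate constant grows by a factor of $\exp\!\left(\frac{20\,000}{8.314 \times 298}\right) \approx e^{8.1} \approx 3\times10^{3}$.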
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: they increase the nucleophilicity of carbonyls, allowing a reaction that would not otherwise proceed with electrophiles. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.
Reactions in organic chemistry.
In organic chemistry, in addition to oxidation, reduction or acid–base reactions, a number of other reactions can take place which involve covalent bonds between carbon atoms or between carbon and heteroatoms (such as oxygen, nitrogen or the halogens). Many specific reactions in organic chemistry are name reactions designated after their discoverers.
One of the most industrially important reactions is the cracking of heavy hydrocarbons at oil refineries to create smaller, simpler molecules. This process is used to manufacture gasoline. Specific types of organic reactions may be grouped by their reaction mechanisms (particularly substitution, addition and elimination) or by the types of products they produce (for example, methylation, polymerisation and halogenation).
Substitution.
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate, forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and undergo nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.
The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then is the leaving group cleaved. These two mechanisms differ in the stereochemistry of the products. SN1 is not stereospecific: the intermediate carbocation is planar, so the nucleophile can attack it from either face, and a pre-existing stereocenter gives a mixture of stereoisomers. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.
Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations, and sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophilic attack results in the so-called σ-complex, an intermediate in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.
In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.
<chem>X. + R-H -> X-H + R.</chem>
<chem>R. + X2 -> R-X + X.</chem>
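For completeness (standard steps of this chain, added here for illustration), the chain is initiated by homolysis of the halogen under light or heat and terminated when two radicals recombine:
<chem>X2 -> 2 X.</chem>
<chem>R. + X. -> R-X</chem>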
Addition and elimination.
The addition and its counterpart, the elimination, are reactions that change the number of substituents on the carbon atom, and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to the nucleophilic substitution, there are several possible reaction mechanisms that are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, the formation of the double bond, takes place with the elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is, the proton is split off first. This mechanism requires the participation of a base. Because the conditions are similar, both the E1 and E1cb eliminations always compete with the SN1 substitution.
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.
The counterpart of elimination is the addition, in which double or triple bonds are converted into single bonds. Similar to substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromide). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted by Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."
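For instance (a standard textbook case, not part of the original text), the Markovnikov-selective addition of hydrogen bromide to propene places the bromine on the more substituted carbon atom:
<chem>CH3-CH=CH2 + HBr -> CH3-CHBr-CH3</chem>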
If the addition of a functional group is to take place at the less substituted carbon atom of the double bond, the electrophilic addition with acids is not possible. In this case, one has to use the hydroboration–oxidation reaction, in which, in the first step, the boron atom acts as the electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role in the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with elimination so that after the reaction the carbonyl group is present again. It is, therefore, called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta-unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.
Some additions which cannot be carried out with nucleophiles or electrophiles can succeed with free radicals. As with free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of free-radical polymerization.
Other organic reaction mechanisms.
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner–Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon–carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement.
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in a different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.
Biochemical reactions.
Biochemical reactions are mainly controlled by complex proteins called enzymes, which are usually specialized to catalyze only a single, specific reaction. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is the anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. Important energy sources are glucose and oxygen, which can be produced by plants via photosynthesis or assimilated from food and air, respectively. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions. Decomposition of organic material by fungi, bacteria and other micro-organisms is also within the scope of biochemistry.
Applications.
Chemical reactions are central to chemical engineering, where they are used for the synthesis of new compounds from natural raw materials such as petroleum, mineral ores, and oxygen in air. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the number of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.
Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.
Monitoring.
Mechanisms of monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real-time analysis are the measurement of pH and the analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where the use of femtosecond lasers allows short-lived transition states to be monitored on time scales down to a few femtoseconds.
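A minimal sketch (added for illustration; the time points and concentrations below are invented, and the numpy library is assumed available) of how such in-situ concentration measurements are commonly reduced to a rate constant: for a first-order reaction A → products, $[\mathrm{A}](t) = [\mathrm{A}]_0 e^{-kt}$, so the logarithm of the concentration is linear in time.

import numpy as np

# Illustrative (invented) in-situ measurements of [A] over time
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # time, s
conc = np.array([1.00, 0.74, 0.55, 0.41, 0.30])   # concentration, mol/L

# For first-order kinetics, ln[A] = ln[A]0 - k*t: fit a line to ln(conc)
slope, intercept = np.polyfit(t, np.log(conc), 1)
k = -slope
print(f"first-order rate constant k = {k:.4f} 1/s")  # about 0.03 1/s for this data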
Casiquiare canal.
The Casiquiare river or canal () is a natural distributary of the upper Orinoco flowing southward into the Rio Negro, in Venezuela, South America. As such, it forms a unique natural canal between the Orinoco and Amazon river systems. It is the world's largest river of its kind linking two major river systems, a so-called bifurcation. The area forms a water divide, more dramatically so at regional flood stage.
Etymology.
The name "Casiquiare", first used in that form by Manuel Román, likely derives from the Ye'kuana language name of the river, "Kashishiwadi".
Discovery.
The first European to describe it was Spanish Jesuit missionary and explorer Cristóbal Diatristán de Acuña in 1639.
In 1744 a Jesuit priest named Manuel Román, while ascending the Orinoco River in the region of La Esmeralda, met some Portuguese slave-traders from the settlements on the Rio Negro. The Portuguese insisted they were not in Spanish territory but on a tributary of the Amazon; they invited Román back with them to prove their claim. He accompanied them on their return, by way of the Casiquiare canal, and afterwards retraced his route to the Orinoco. Along the way, he made first contact with the Ye'kuana people, whom he enlisted to help in his journey. Charles Marie de La Condamine, seven months later, was able to give to the "Académie française" an account of Father Román's voyage, and thus confirm the existence of this waterway, first reported by Father Acuña in 1639.
Little credence was given to Román's statement until it was verified, in 1756, by the Spanish Boundary-line Commission of José Yturriaga and Solano. In 1800 German scientist Alexander von Humboldt and French botanist Aimé Bonpland explored the river. In 1968 the Casiquiare was navigated by an SRN6 hovercraft during an expedition reported in "The Geographical Journal".
Geography.
The origin of the Casiquiare, at the River Orinoco, is below the mission of La Esmeralda at , and about above sea level. Its mouth at the Rio Negro, an affluent of the Amazon River, is near the town of San Carlos and is above sea level.
The general course is south-west, and its length, including windings, is about . Its width, at its bifurcation with the Orinoco, is approximately , with a current towards the Rio Negro of . However, as it gains in volume from the very numerous tributary streams, large and small, that it receives en route, its velocity increases, and in the wet season reaches , even in certain stretches. It broadens considerably as it approaches its mouth, where it is about wide. The volume of water the Casiquiare captures from the Orinoco is small in comparison to what it accumulates in its course. Nevertheless, the geological processes are ongoing, and evidence points to a slow and gradual increase in the size of Casiquiare. It is likely that stream capture is in progress, i.e. what currently is the uppermost Orinoco basin, including Cunucunuma River, eventually will be entirely diverted by the Casiquiare into the Amazon basin.
In flood time, it is said to have a second connection with the Rio Negro by a branch, which it throws off to the westward, called the Itinivini, which leaves it at a point about above its mouth. In the dry season, it has shallows, and is obstructed by sandbanks, a few rapids and granite rocks. Its shores are densely wooded, and the soil more fertile than that along the Rio Negro. The general slope of the plains through which the canal runs is south-west, but those of the Rio Negro slope south-east.
The Casiquiare is not a sluggish canal on a flat tableland, but a great, rapid river which, if its upper waters had not found contact with the Orinoco, perhaps by cutting back, would belong entirely to the Negro branch of the Amazon.
To the west of the Casiquiare, there is a much shorter and easier portage between the Orinoco and Amazon basins, called the isthmus of Pimichin, which is reached by ascending the Temi branch of the Atabapo River, an affluent of the Orinoco. Although the Temi is somewhat obstructed, it is believed that it could easily be made navigable for small craft. The isthmus is across, with undulating ground, nowhere over high, with swamps and marshes. In the early 20th century, it was much used for the transit of large canoes, which were hauled across it from the Temi River and reached the Rio Negro by a little stream called the Pimichin.
Hydrographic divide.
The Casiquiare canal – Orinoco River hydrographic divide is a representation of the hydrographic water divide that delineates the separation between the Orinoco Basin and the Amazon Basin. (The Orinoco Basin flows west–north–northeast into the Caribbean; the Amazon Basin flows east into the western Atlantic in the extreme northeast of Brazil.)
Essentially the river divide is a west-flowing, upriver section of Venezuela's Orinoco River with an outflow to the south into the Amazon Basin. This named outflow is the Casiquiare canal, which, as it heads downstream (southerly), picks up speed and also accumulates water volume.
The greatest manifestation of the divide occurs during floods. During flood stage, the Casiquiare's main outflow point into the Rio Negro is supplemented by an overflow: a second, more minor bifurcation channel entering the Rio Negro upstream from the major, common low-water confluence. At flood, the river becomes an area flow source, far more than a narrow confined river.
The Casiquiare canal connects the upper Orinoco, below the mission of Esmeraldas, with the Rio Negro affluent of the Amazon River near the town of San Carlos.
The simplest description of the water divide (besides the entire area-floodplain) is a "south-bank Orinoco River strip" at the exit point of the Orinoco, which is also the origin of the Casiquiare canal. During the Orinoco's flood stage, however, that single, simply defined "origin of the canal" becomes a region: an entire strip along the southern bank of the Orinoco River.
Capetian dynasty.
The Capetian dynasty ( ; ), also known as the House of France (), is a dynasty of Frankish origin, and a branch of the Robertians agnatically, and the Karlings through female lines. It is among the largest and oldest royal houses in Europe and the world, and consists of Hugh Capet, the founder of the dynasty, and his male-line descendants, who ruled in France without interruption from 987 to 1792, and again from 1814 to 1848. The senior line from the House of Capet ruled in France from the election of Hugh Capet in 987 until the death of Charles IV in 1328. That line was succeeded by cadet branches, first the House of Valois, and succeeding them the House of Bourbon, which ruled until the French Revolution abolished the monarchy in 1792 and tried and executed King Louis XVI in 1793. The Bourbons were restored in 1814 in the aftermath of Napoleon's defeat, but had to vacate the throne again in 1830 in favor of the last Capetian monarch of France, Louis Philippe I, who belonged to the House of Orléans, a cadet branch of the Bourbons. Cadet branches of the Capetian House of Bourbon are still reigning over Spain and Luxembourg.
The dynasty had a crucial role in the formation of the French state. From a power base initially confined to their own demesne, the Île-de-France, the Capetian kings slowly but steadily increased their power and influence until it grew to cover the entirety of their realm. For a detailed narration on the growth of French royal power, see "Crown lands of France". Members of the dynasty were traditionally Catholic, and the early Capetians had an alliance with the Church. The French were also the most active participants in the Crusades, culminating in a series of five Crusader kings – Louis VII, Philip Augustus, Louis VIII, Louis IX, and Philip III. The Capetian alliance with the papacy suffered a severe blow after the disaster of the Aragonese Crusade. Philip III's son and successor, Philip IV, arrested Pope Boniface VIII and brought the papacy under French control. The later Valois, starting with Francis I, ignored religious differences and allied with the Ottoman sultan to counter the growing power of the Holy Roman Empire. Henry IV was a Protestant at the time of his accession, but realized the necessity of conversion after four years of religious warfare.
The Capetians generally enjoyed a harmonious family relationship. By tradition, younger sons and brothers of the king of France were given appanages for them to maintain their rank and to dissuade them from claiming the French crown itself. When Capetian cadets did aspire for kingship, their ambitions were directed not at the French throne, but at foreign thrones. As a result, the Capetians have reigned at different times in the kingdoms of Portugal, Sicily and Naples, Navarre, Hungary and Croatia, Poland, Spain and Sardinia, grand dukedoms of Lithuania and Luxembourg, and in Latin and Brazilian empires. In modern times, King Felipe VI of Spain is a member of this family, while Grand Duke Henri of Luxembourg is related to the family by agnatic kinship; both through the Bourbon branch of the dynasty. Along with the House of Habsburg, arguably its greatest historic rival, it was one of the two oldest European royal dynasties. It was also one of the most powerful royal families in European history, having played a major role in its politics for much of its existence. According to Oxford University, 75% of all royal families in European history are related to the Capetian dynasty.
Name origins and usage.
The name of the dynasty derives from its founder, Hugh, who was known as "Hugh Capet". The meaning of "Capet" (a nickname rather than a surname of the modern sort) is unknown. While folk etymology identifies it with "cape", other suggestions indicate it might be connected to the Latin word "caput" ("head"), and explain it as meaning "chief" or "head".
Historians in the 19th century (see House of France) came to apply the name "Capetian" to both the ruling house of France and to the wider-spread male-line descendants of Hugh Capet. It was not a contemporary practice. The name "Capet" has also been used as a surname for French royalty, particularly but not exclusively those of the House of Capet. One notable use was during the French Revolution, when the dethroned King Louis XVI (a member of the House of Bourbon and a direct male-line descendant of Hugh Capet) and Queen Marie Antoinette (a member of the House of Habsburg-Lorraine) were referred to as "Louis and Antoinette Capet" (the queen being addressed as "the Widow Capet" after the execution of her husband).
Capetian miracle.
The Capetian miracle () refers to the dynasty's ability to attain and hold onto the French crown.
In 987, Hugh Capet was elected to succeed Louis V of the Carolingian dynasty that had ruled France for over three centuries. By a process of associating elder sons with them in the kingship, the early Capetians established the hereditary succession in their family and transformed a theoretically electoral kingship into a sacral one. By the time of Philip II Augustus, who became king in 1180, the Capetian hold on power was so strong that the practice of associate kingship was dropped. While the Capetian monarchy began as one of the weakest in Europe, drastically eclipsed by the new Anglo-Norman realm in England (who, as dukes of Normandy, were technically their vassals) and even other great lords of France, the political value of orderly succession in the Middle Ages cannot be overstated. The orderly succession of power from father to son over such a long period of time meant that the French monarchs, who originally were essentially just the direct rulers of the Île-de-France, were able to preserve and extend their power, while over the course of centuries the great peers of the realm would eventually lose their power in one succession crisis or another.
By comparison, the Crusader Kingdom of Jerusalem was constantly beset with internal succession disputes because each generation only produced female heirs who tended to die young. Even the English monarchy encountered severe succession crises, such as The Anarchy of the 1120s between Stephen and Matilda, and the murder of Arthur I, Duke of Brittany, the primogeniture heir of Richard I of England. The latter case would deal a severe blow to the prestige of King John, leading to the eventual destruction of Angevin hegemony in France. In contrast, the French kings were able to maintain uncontested father-to-son succession from the time of Hugh Capet until the succession crisis which began the Hundred Years' War of the 14th century.
Capetians through history.
Over the succeeding centuries, Capetians spread throughout Europe, ruling every form of provincial unit from kingdoms to manors.
Salic law.
Salic law, re-established during the Hundred Years' War from an ancient Frankish tradition, caused the French monarchy to permit only male (agnatic) descendants of Hugh to succeed to the throne of France.
Without Salic law, upon the death of John I, the crown would have passed to his half-sister, Joan (later Joan II of Navarre). However, Joan's paternity was suspect due to her mother's adultery in the Tour de Nesle Affair; the French magnates adopted Salic law to avoid the succession of a possible bastard.
In 1328, King Charles IV of France died without male heirs, as his brothers did before him. Philip of Valois, the late king's first cousin, acted as regent, pending the birth of the king's posthumous child, which proved to be a girl. Isabella of France, sister of Charles IV, claimed the throne for her son, Edward III of England. The English king did not find support among the French lords, who made Philip of Valois their king. From then on the French succession not only excluded females but also rejected claims based on the female line of descent.
Thus the French crown passed from the House of Capet, after the death of Charles IV, to Philip VI of France of the House of Valois, a cadet branch of the Capetian dynasty.
This did not affect monarchies not under that law such as Portugal, Spain, Navarre, and various smaller duchies and counties. Therefore, many royal families appear and disappear in the French succession or become cadet branches upon marriage. A complete list of the senior-most line of Capetians is available below.
Capetian cadet branches.
The Capetian dynasty has been broken many times into (sometimes rival) cadet branches. A cadet branch is a line of descent from another line than the senior-most. This list of cadet branches shows most of the Capetian cadet lines and designates their royal French progenitor, although some sub-branches are not shown.
Senior Capets.
Throughout most of history, the Senior Capet and the King of France were synonymous terms. Only in the time before Hugh Capet took the crown for himself and after the reign of Charles X is there a distinction such that the senior Capet must be identified independently from succession to the French Crown. However, since primogeniture and the Salic law provided for the succession of the French throne for most of French history, here is a list of all the French kings from Hugh until Charles, and all the Legitimist pretenders thereafter. All dates are for seniority, not reign.
King of France:
Legitimist Pretenders:
The Capetian dynasty today.
Many years have passed since the Capetian monarchs ruled a large part of Europe; however, members of the family still reign as kings and hold other titles. Currently two Capetian monarchs still rule in Spain and Luxembourg. In addition, seven pretenders represent exiled dynastic monarchies in Brazil, France, Spain, Portugal, Parma and Two Sicilies. The current legitimate, senior family member is Louis-Alphonse de Bourbon, known by his supporters as Duke of Anjou, who also holds the Legitimist ("Blancs d'Espagne") claim to the French throne. Overall, dozens of branches of the Capetian dynasty still exist throughout Europe.
Except for the House of Braganza (founded by an illegitimate son of King John I of Portugal, who was himself illegitimate), all current major Capetian branches are of the Bourbon cadet branch. Within the House of Bourbon, many of these lines are themselves well-defined cadet lines of the House.
Family tree.
Male, male-line, legitimate, non-morganatic members of the house who either lived to adulthood, or who held a title as a child, are included. Heads of the house are in bold.
Cuboctahedron.
A cuboctahedron is a polyhedron with 8 triangular faces and 6 square faces. A cuboctahedron has 12 identical vertices, with 2 triangles and 2 squares meeting at each, and 24 identical edges, each separating a triangle from a square. As such, it is a quasiregular polyhedron, i.e., an Archimedean solid that is not only vertex-transitive but also edge-transitive. It is radially equilateral. Its dual polyhedron is the rhombic dodecahedron.
Construction.
The cuboctahedron can be constructed in many ways: for example, by rectifying a cube or a regular octahedron (cutting each back to the midpoints of its edges), or by attaching two regular triangular cupolas base to base with a twist (a triangular gyrobicupola).
From all of these constructions, the cuboctahedron has 14 faces: 8 equilateral triangles and 6 squares. It also has 24 edges and 12 vertices.
The Cartesian coordinates for the vertices of a cuboctahedron with edge length $\sqrt{2}$ centered at the origin are:
$$(\pm 1, \pm 1, 0), \quad (\pm 1, 0, \pm 1), \quad (0, \pm 1, \pm 1).$$
Properties.
Measurement and other metric properties.
The surface area of a cuboctahedron $A$ can be determined by summing the areas of its polygonal faces. The volume of a cuboctahedron $V$ can be determined by slicing it into two regular triangular cupolas and summing their volumes. Given an edge length $a$, its surface area and volume are:
$$A = \left(6 + 2\sqrt{3}\right)a^2 \approx 9.46\,a^2, \qquad V = \frac{5\sqrt{2}}{3}a^3 \approx 2.36\,a^3.$$
The dihedral angles of a cuboctahedron can be calculated from the angles of triangular cupolas. The dihedral angle of a triangular cupola is approximately 125° between square and triangle, 54.7° between square and hexagon, and 70.5° between triangle and hexagon. Therefore, on the edges where the bases of two triangular cupolas are attached, the square-to-triangle dihedral angle is 54.7° + 70.5° ≈ 125°, matching the square-to-triangle angle within each cupola. Hence every dihedral angle of a cuboctahedron between a square and a triangle is approximately 125°.
Buckminster Fuller found that the cuboctahedron is the only polyhedron in which the distance from its center to each vertex is the same as the length of its edges. In other words, its radial and edge vectors in three-dimensional space have the same length, a configuration he called "vector equilibrium". The rigid struts and the flexible vertices of a cuboctahedron may also be transformed progressively into a regular icosahedron, regular octahedron, and regular tetrahedron. Fuller named this the "jitterbug transformation".
A cuboctahedron has the Rupert property, meaning that a copy of the same size or larger can pass through a hole cut in it.
Symmetry and classification.
The cuboctahedron is an Archimedean solid, i.e., a highly symmetric, semi-regular polyhedron in which two or more different regular polygonal faces meet at each vertex. The cuboctahedron has two symmetries, resulting from the constructions mentioned above: the same symmetry as the regular octahedron or cube, the octahedral symmetry $\mathrm{O_h}$, and the same symmetry as the regular tetrahedron, the tetrahedral symmetry $\mathrm{T_d}$. The polygonal faces that meet at every vertex are two equilateral triangles and two squares, and the vertex figure of a cuboctahedron is 3.4.3.4. The dual of a cuboctahedron is the rhombic dodecahedron.
Radial equilateral symmetry.
In a cuboctahedron, the long radius (center to vertex) is the same as the edge length; thus its long diameter (vertex to opposite vertex) is 2 edge lengths. Its center is like the apical vertex of a canonical pyramid: one edge length away from "all" the other vertices. (In the case of the cuboctahedron, the center is in fact the apex of 6 square and 8 triangular pyramids). This radial equilateral symmetry is a property of only a few uniform polytopes, including the two-dimensional hexagon, the three-dimensional cuboctahedron, and the four-dimensional 24-cell and 8-cell (tesseract). "Radially equilateral" polytopes are those that can be constructed, with their long radii, from equilateral triangles which meet at the center of the polytope, each contributing two radii and an edge. Therefore, all the interior elements which meet at the center of these polytopes have equilateral triangle inward faces, as in the dissection of the cuboctahedron into 6 square pyramids and 8 tetrahedra.
Each of these radially equilateral polytopes also occurs as cells of a characteristic space-filling tessellation: the tiling of regular hexagons, the rectified cubic honeycomb (of alternating cuboctahedra and octahedra), the 24-cell honeycomb and the tesseractic honeycomb, respectively. Each tessellation has a dual tessellation; the cell centers in a tessellation are cell vertices in its dual tessellation. The densest known regular sphere-packing in two, three and four dimensions uses the cell centers of one of these tessellations as sphere centers.
Because it is radially equilateral, the cuboctahedron's center is one edge length distant from the 12 vertices.
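With the Cartesian coordinates given above (under the stated assumption of edge length $\sqrt{2}$), this is easy to verify: a vertex such as $(1, 1, 0)$ lies at distance $\sqrt{1^2 + 1^2} = \sqrt{2}$ from the origin, which equals the distance between adjacent vertices such as $(1, 1, 0)$ and $(1, 0, 1)$.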
Configuration matrix.
The cuboctahedron can be represented as a configuration matrix with elements grouped by symmetry transitivity classes. A configuration matrix is a matrix in which the rows and columns correspond to the elements of a polyhedron as in the vertices, edges, and faces. The diagonal of a matrix denotes the number of each element that appears in a polyhedron, whereas the non-diagonal of a matrix denotes the number of the column's elements that occur in or at the row's element.
The cuboctahedron has 1 transitivity class of 12 vertices, 1 class of 24 edges, and 2 classes of faces: 8 triangular and 6 square; each element in a matrix's diagonal. The 24 edges can be seen in 4 central hexagons.
With octahedral symmetry (orbifold 432), the squares have 4-fold symmetry, the triangles 3-fold symmetry, and the vertices 2-fold symmetry. With tetrahedral symmetry (orbifold 332), the 24 edges split into 2 edge classes, and the 8 triangles split into 2 face classes. The square symmetry is reduced to 2-fold.
Graph.
The skeleton of a cuboctahedron may be represented as a graph, one of the Archimedean graphs. It has 12 vertices and 24 edges. It is a quartic graph: each vertex is adjacent to exactly four others.
The graph of a cuboctahedron may be constructed as the line graph of the cubical graph, making it a locally linear graph.
The 24 edges can be partitioned into 2 sets isomorphic to tetrahedral symmetry. The edges can also be partitioned into 4 hexagonal cycles, representing centrosymmetry, with only opposite vertices and edges in the same transitivity class.
Related polyhedra and honeycomb.
The cuboctahedron shares its skeleton with two nonconvex uniform polyhedra, the cubohemioctahedron and the octahemioctahedron. These polyhedra are constructed from the skeleton of a cuboctahedron in which four hexagonal planes bisect its diagonals, intersecting its interior. Adding six squares or eight equilateral triangles results in the cubohemioctahedron or the octahemioctahedron, respectively.
The cuboctahedron 2-covers the tetrahemihexahedron, which accordingly has the same abstract vertex figure (two triangles and two squares: $3.4.3.4$) and half the vertices, edges, and faces. (The actual vertex figure of the tetrahemihexahedron is $3.4.\tfrac{3}{2}.4$, with the $\tfrac{3}{2}$ factor due to the cross.)
The cuboctahedron can be dissected into 6 square pyramids and 8 tetrahedra meeting at a central point. This dissection is expressed in the tetrahedral-octahedral honeycomb where pairs of square pyramids are combined into octahedra.
Appearance.
The cuboctahedron was probably known to Plato: Heron's "Definitiones" quotes Archimedes as saying that Plato knew of a solid made of 8 triangles and 6 squares.
Cube.
A cube is a three-dimensional solid object in geometry. A polyhedron, its eight vertices and twelve straight edges of the same length form six square faces of the same size. It is a type of parallelepiped, with pairs of parallel opposite faces with the same shape and size, and is also a rectangular cuboid with right angles between pairs of intersecting faces and pairs of intersecting edges. It is an example of many classes of polyhedra, such as Platonic solids, regular polyhedrons, parallelohedrons, zonohedrons, and plesiohedrons. The dual polyhedron of a cube is the regular octahedron.
The cube can be represented in many ways, such as the cubical graph, which can be constructed by using the Cartesian product of graphs. The cube is the three-dimensional hypercube, a family of polytopes also including the two-dimensional square and four-dimensional tesseract. A cube with unit side length is the canonical unit of volume in three-dimensional space, relative to which other solid objects are measured. Other related figures involve the construction of polyhedra, space-filling and honeycombs, polycubes, as well as cubes in compounds, spherical, and topological space.
The cube was discovered in antiquity, associated with the nature of earth by Plato, for whom the Platonic solids are named. It can be derived differently to create more polyhedra, and it has applications to construct a new polyhedron by attaching others. Other applications include popular culture of toys and games, arts, optical illusions, architectural buildings, as well as natural science and technology.
Properties.
A cube is a special case of rectangular cuboid in which the edges are equal in length. Like other cuboids, every face of a cube has four vertices, each of which connects with three edges of the same length. These edges form square faces, making the dihedral angle of a cube between every two adjacent squares the interior angle of a square, 90°. Hence, the cube has six faces, twelve edges, and eight vertices, and its Euler characteristic is 2, as for any convex polyhedron.
The cube is one of the five Platonic solids—polyhedrons in which all the regular polygons are congruent (same shape and size) and the same number of faces meet at each vertex. Every three square faces surrounding a vertex are orthogonal to each other, so the cube is classified as an orthogonal polyhedron. The cube may also be considered a parallelepiped in which the pairs of the opposite faces are congruent (or more specifically a rhombohedron with edges of the same length), and a trigonal trapezohedron since its square faces are the special cases of rhombi.
Measurement and other metric properties.
Given a cube with edge length $a$, the face diagonal of the cube is the diagonal of a square, $\sqrt{2}a$, and the space diagonal of the cube is a line connecting two vertices that are not in the same face, $\sqrt{3}a$. Both formulas can be determined by the Pythagorean theorem: the face diagonal is $\sqrt{a^2 + a^2} = \sqrt{2}a$, and the space diagonal is $\sqrt{(\sqrt{2}a)^2 + a^2} = \sqrt{3}a$. The surface area of a cube $A$ is six times the area of a square:
$$A = 6a^2.$$
The volume of a cuboid is the product of its length, width, and height. Because all the edges of a cube are equal in length, the formula for the volume of a cube is the third power of its side length, leading to the use of the term "cubic" to mean raising any number to the third power:
$$V = a^3.$$
One special case is the unit cube, so named for measuring a single unit of length along each edge. It follows that each face is a unit square and that the entire figure has a volume of 1 cubic unit. Prince Rupert's cube, named after Prince Rupert of the Rhine, is the largest cube that can pass through a hole cut into the unit cube, despite having sides approximately 6% longer. Such a cube can pass through a copy of itself of the same size or smaller. A geometric problem of doubling the cube—alternatively known as the "Delian problem"—requires the construction of a cube with a volume twice the original by using only a compass and straightedge. Ancient mathematicians could not solve this problem until the French mathematician Pierre Wantzel proved it was impossible in 1837.
The cube has three types of closed geodesics, or paths on a cube's surface that are locally straight. In other words, they avoid the vertices, follow line segments across the faces that they cross, and form complementary angles on the two incident faces of each edge that they cross. One type lies in a plane parallel to any face of the cube, forming a square, with the length being equal to the perimeter of a face, four times the length of each edge. Another type lies in a plane perpendicular to the long diagonal, forming a regular hexagon; its length is $3\sqrt{2}$ times that of an edge. The third type is a non-planar hexagon.
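The hexagonal geodesic's length can be seen directly (a short added derivation): the plane perpendicular to a long diagonal through the cube's center crosses the midpoints of six edges, cutting the surface in a regular hexagon whose side is half a face diagonal, $\frac{\sqrt{2}}{2}a$; the geodesic's length is therefore $6 \cdot \frac{\sqrt{2}}{2}a = 3\sqrt{2}\,a$.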
Relation to the spheres.
With edge length $a$, the inscribed sphere of a cube is the sphere tangent to the faces of a cube at their centroids, with radius $\frac{a}{2}$. The midsphere of a cube is the sphere tangent to the edges of a cube, with radius $\frac{\sqrt{2}}{2}a$. The circumscribed sphere of a cube is the sphere tangent to the vertices of a cube, with radius $\frac{\sqrt{3}}{2}a$.
For a cube whose circumscribed sphere has radius $R$, and for a given point in its three-dimensional space with distances $d_i$ from the cube's eight vertices, it is:
$$\frac{1}{8}\sum_{i=1}^{8} d_i^4 + \frac{16R^4}{9} = \left(\frac{1}{8}\sum_{i=1}^{8} d_i^2 + \frac{2R^2}{3}\right)^{\!2}.$$
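As a quick consistency check of this identity (an added verification of the form given above): at the center of the cube every $d_i = R$, so the left side equals $R^4 + \frac{16R^4}{9} = \frac{25R^4}{9}$ and the right side equals $\left(R^2 + \frac{2R^2}{3}\right)^2 = \left(\frac{5R^2}{3}\right)^2 = \frac{25R^4}{9}$, in agreement.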
Symmetry.
The cube has octahedral symmetry $\mathrm{O_h}$. There are nine reflection symmetries (where the two halves cut by a plane are identical): five cut the cube from the midpoints of its edges, and four cut it diagonally. It also has octahedral rotational symmetry $\mathrm{O}$ (whereby rotation around an axis results in an identical appearance): three axes pass through the centroids of the cube's opposite faces, six through the midpoints of the cube's opposite edges, and four through the cube's opposite vertices; these axes have respectively four-fold rotational symmetry (0°, 90°, 180°, and 270°), two-fold rotational symmetry (0° and 180°), and three-fold rotational symmetry (0°, 120°, and 240°). Its automorphism group has order 48; that is, the cube has 48 isometries.
The dual polyhedron can be obtained from each of the polyhedra's vertices tangent to a plane by a process known as polar reciprocation. One property of dual polyhedra is that the polyhedron and its dual share their three-dimensional symmetry point group. In this case, the dual polyhedron of a cube is the regular octahedron, and both of these polyhedra have the same octahedral symmetry.
The cube is face-transitive, meaning its square faces are all alike and any face can be mapped to any other by rotation and reflection. It is vertex-transitive, meaning all of its vertices are equivalent and can be mapped isometrically under its symmetry. It is also edge-transitive, meaning any edge can be mapped to any other, so the same kinds of faces surround each vertex in the same or reverse order and every two adjacent faces have the same dihedral angle. Therefore, the cube is a regular polyhedron. Each vertex is surrounded by three squares, so the cube is $4.4.4$ by vertex configuration or $\{4,3\}$ in Schläfli symbol.
Applications.
Cubes have appeared in many roles in popular culture. The cube is the most common form of dice. Puzzle toys such as pieces of a Soma cube, Rubik's Cube, and Skewb are built of cubes. "Minecraft" is an example of a sandbox video game of cubic blocks. The outdoor sculpture "Alamo" (1967) is a cube standing on a vertex. Optical illusions such as the impossible cube and Necker cube have been explored by artists such as M. C. Escher. Salvador Dalí's painting "Corpus Hypercubus" (1954) contains a tesseract unfolding into a six-armed cross; a similar construction is central to Robert A. Heinlein's short story "And He Built a Crooked House" (1940). The cube was applied in Alberti's treatise on Renaissance architecture, "De re aedificatoria" (1450). Cube houses in the Netherlands are a set of cubical houses whose hexagonal space diagonals become the main floor.
Cubes are also found in natural science and technology. It is applied to the unit cell of a crystal known as a cubic crystal system. Pyrite is an example of a mineral with a commonly cubic shape, although there are many varied shapes. The radiolarian "Lithocubus geometricus", discovered by Ernst Haeckel, has a cubic shape. A historical attempt to unify three physics ideas of relativity, gravitation, and quantum mechanics used the framework of a cube known as a "cGh" cube. Cubane is a synthetic hydrocarbon consisting of eight carbon atoms arranged at the corners of a cube, with one hydrogen atom attached to each carbon atom.
Other technological cubes include the spacecraft device CubeSat, and thermal radiation demonstration device Leslie cube. Cubical grids are usual in three-dimensional Cartesian coordinate systems. In computer graphics, an algorithm divides the input volume into a discrete set of cubes known as the unit on isosurface, and the faces of a cube can be used for mapping a shape.
The Platonic solids are five polyhedra known since antiquity. The set is named for Plato who, in his dialogue "Timaeus", attributed these solids to nature. One of them, the cube, represented the classical element of earth because of its stability. Euclid's "Elements" defined the Platonic solids, including the cube, and showed how to find the ratio of the circumscribed sphere's diameter to the edge length. Following Plato's use of the regular polyhedra as symbols of nature, Johannes Kepler in his "Harmonices Mundi" sketched each of the Platonic solids; he decorated the cube's side with a tree. In his "Mysterium Cosmographicum", Kepler also proposed that the ratios between sizes of the orbits of the planets are the ratios between the sizes of the inscribed and circumscribed spheres of the Platonic solids. That is, if the orbits are great circles on spheres, the sphere of Mercury is tangent to a regular octahedron, whose vertices lie on the sphere of Venus, which is in turn tangent to a regular icosahedron, within the sphere of Earth, within a regular dodecahedron, within the sphere of Mars, within a regular tetrahedron, within the sphere of Jupiter, within a cube, within the sphere of Saturn. In fact, the orbits are not circles but ellipses (as Kepler himself later showed), and these relations are only approximate.
Construction.
An elementary way to construct a cube is using its net, an arrangement of edge-joining polygons, by connecting the edges of those polygons. Eleven nets for the cube are possible.
In analytic geometry, a cube may be constructed using Cartesian coordinate systems. For a cube centered at the origin, with edges parallel to the axes and with an edge length of 2, the Cartesian coordinates of the vertices are $(\pm 1, \pm 1, \pm 1)$. Its interior consists of all points $(x_0, x_1, x_2)$ with $-1 < x_i < 1$ for all $i$. A cube's surface with center $(x_0, y_0, z_0)$ and edge length of $2a$ is the locus of all points $(x, y, z)$ such that
$$\max\{\,|x - x_0|,\ |y - y_0|,\ |z - z_0|\,\} = a.$$
The cube is a Hanner polytope, because it can be constructed by using the Cartesian product of three line segments. Its dual polyhedron, the regular octahedron, is constructed by the direct sum of three line segments.
Representation.
As a graph.
According to Steinitz's theorem, a graph can be represented as the skeleton of a polyhedron if it has two properties: it is planar (the graph can be drawn with its edges connecting the vertices without crossing other edges), and it is 3-connected (whenever the graph has more than three vertices and two of the vertices are removed, the remainder stays connected). The skeleton of a cube, represented as a graph, is called the cubical graph, a Platonic graph. It has the same number of vertices and edges as the cube: eight vertices and twelve edges. The cubical graph is also classified as a prism graph, resembling the skeleton of a cuboid.
The cubical graph is a special case of hypercube graph, or $n$-cube, denoted $Q_n$, because it can be constructed by using the Cartesian product of graphs: an operation that combines two graphs, connecting pairs of corresponding vertices with edges to form a new graph. The cubical graph is the Cartesian product of the square graph $Q_2$ and a single edge $K_2$; roughly speaking, it is built from a square. In other words, the cubical graph is constructed by connecting each vertex of two squares with an edge. Notationally, the cubical graph is $Q_3$. Like any hypercube graph, it has a cycle that visits every vertex exactly once, and it is also an example of a unit distance graph.
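A minimal sketch of this construction (added for illustration; it assumes the networkx library is available): build the cubical graph as the Cartesian product of a square with a single edge, then check it against the built-in hypercube generator.

import networkx as nx

square = nx.cycle_graph(4)   # the square Q2, a 4-cycle
edge = nx.path_graph(2)      # a single edge, K2
q3 = nx.cartesian_product(square, edge)

print(q3.number_of_nodes(), q3.number_of_edges())   # 8 12, as for the cube
print(nx.is_isomorphic(q3, nx.hypercube_graph(3)))  # True: this is Q3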
The cubical graph is bipartite: its vertices can be divided into two independent sets of four, with every edge joining a vertex of one set to a vertex of the other. However, each vertex in one set does not connect to all vertices in the second, so this bipartite graph is not complete. It is an example of both a crown graph and a bipartite Kneser graph.
In orthogonal projection.
An object illuminated by parallel rays of light casts a shadow on a plane perpendicular to those rays, called an orthogonal projection. A polyhedron is considered equiprojective if, for some position of the light, its orthogonal projection is a regular polygon. The cube is equiprojective because, if the light is parallel to one of the four lines joining a vertex to the opposite vertex, its projection is a regular hexagon.
As a configuration matrix.
The cube can be represented as a configuration matrix, a matrix in which the rows and columns correspond to the elements of a polyhedron as the vertices, edges, and faces. The diagonal of a matrix denotes the number of each element that appears in a polyhedron, whereas the non-diagonal of a matrix denotes the number of the column's elements that occur in or at the row's element. The cube's eight vertices, twelve edges, and six faces are denoted by each element in a matrix's diagonal (8, 12, and 6). The first column of the middle row indicates that there are two vertices on each edge, denoted as 2; the middle column of the first row indicates that three edges meet at each vertex, denoted as 3. The following matrix is:
$$\begin{bmatrix} 8 & 3 & 3 \\ 2 & 12 & 2 \\ 4 & 4 & 6 \end{bmatrix}$$
Related figures.
Construction of polyhedra.
The cube can appear in the construction of a polyhedron, and some of its types can be derived differently in the following:
The cube can be constructed with six square pyramids, tiling space by attaching their apices. In some cases, this produces the rhombic dodecahedron circumscribing a cube.
Polycubes.
The polycube is a polyhedron in which the faces of many cubes are attached. Analogously, it can be interpreted as the polyominoes in three-dimensional space. When four cubes are stacked vertically, and four more are attached to the exposed faces of the second cube from the top of the stack, the resulting polycube is the Dali cross, named after Salvador Dali. The Dali cross is a space-filling tile polyhedron, which can be represented as the net of a tesseract. A tesseract is the cube's analogue in four-dimensional space, bounded by twenty-four squares and eight cubes.
Space-filling and honeycombs.
Hilbert's third problem asks whether every two equal-volume polyhedra can always be dissected into polyhedral pieces and reassembled into each other. If the answer were yes, then the volume of any polyhedron could be defined axiomatically as the volume of an equivalent cube into which it could be reassembled. Max Dehn solved this problem by inventing the Dehn invariant, answering that not all polyhedra can be reassembled into a cube: two polyhedra of equal volume can be dissected into one another only if they have the same Dehn invariant, and Dehn exhibited equal-volume tetrahedra whose Dehn invariants differ.
The cube has a Dehn invariant of zero, consistent with its ability to form a honeycomb. It is a space-filling tile in three-dimensional space: copies of the cube can be attached to one another's faces without leaving a gap. The cube is a plesiohedron, a special kind of space-filling polyhedron that can be defined as the Voronoi cell of a symmetric Delone set. The plesiohedra include the parallelohedra, which can be translated, without being rotated, to fill a space in which each face of any copy is attached to a like face of another copy. There are five kinds of parallelohedra, one of which is the cuboid. Every three-dimensional parallelohedron is a zonohedron, a centrally symmetric polyhedron whose faces are centrally symmetric polygons. The cube can serve as the cell of a honeycomb: some honeycombs have cubes as their only cells, one example being the cubic honeycomb, the only regular honeycomb in Euclidean three-dimensional space, which has four cubes around each edge.
Miscellaneous.
Polyhedral compounds in which the cubes share the same centre are uniform polyhedron compounds, meaning they are polyhedral compounds whose constituents are identical (although possibly enantiomorphous) uniform polyhedra, in an arrangement that is also uniform. The seventh to ninth entries in Skilling's enumeration of uniform compounds are, respectively, the compound of six cubes with rotational freedom, the compound of three cubes, and the compound of five cubes. Two compounds, consisting of two and three cubes, appear in Escher's wood engraving print "Stars" and in Max Brückner's book "Vielecke und Vielflache".
The spherical cube is the spherical polyhedron obtained by modeling the cube's edges as arcs of great circles, which bound spherical squares.
Hence, the spherical cube consists of six spherical squares with 120° interior angles at each vertex. It has vector equilibrium, meaning that the distance from its centroid to each vertex is the same as the distance from the centroid to each edge. Its dual is the spherical octahedron.
The three-dimensional torus is a topological space defined as homeomorphic to the Cartesian product of three circles. It can be modeled by a cube whose opposite faces are glued to one another.
|
6286
|
46100911
|
https://en.wikipedia.org/wiki?curid=6286
|
Commuter rail
|
Commuter rail or suburban rail is a passenger rail service that primarily operates within a metropolitan area, connecting commuters to a central city from adjacent suburbs or commuter towns. Commuter rail systems can use locomotive-hauled trains or multiple units, using electric or diesel propulsion. Distance charges or zone pricing may be used.
The term can refer to systems with a wide variety of different features and service frequencies, but is often used in contrast to rapid transit or light rail.
Some services share similarities with both commuter rail and high-frequency rapid transit; examples include German S-Bahn in some cities, the Réseau Express Régional (RER) in Paris, the S Lines in Milan, many Japanese commuter systems, the East Rail line in Hong Kong, and some Australasian suburban networks, such as Sydney Trains. Many commuter rail systems share tracks with other passenger services and freight.
In North America, commuter rail sometimes refers only to systems that primarily operate during rush hour and offer little to no service for the rest of the day, with regional rail being used to refer to systems that offer all-day service.
Characteristics.
Most commuter (or suburban) trains are built to main line rail standards, differing from light rail or rapid transit (metro rail) systems in the respects outlined below.
Train schedule.
Compared to rapid transit (or metro rail), commuter/suburban rail often has lower frequency, following a schedule rather than fixed intervals, and fewer stations spaced further apart. Such systems primarily serve lower-density suburban areas rather than the inner city, generally have only one or two stops in a city's central business district, and often share right-of-way with intercity or freight trains. Some services operate only during peak hours, while others reduce departures during off-peak hours and weekends. Average speeds are high, often or higher; these higher speeds better serve the longer distances involved. Some systems run express services that skip some stations in order to run faster and to separate longer-distance riders from short-distance ones.
The general travel distance of commuter trains varies between , but longer distances can be covered when the trains run between two or more cities (e.g. the S-Bahn in the Ruhr area of Germany). Distances between stations may vary, but are usually much longer than those of urban rail systems. In city centres the train either has a terminal station or passes through the city centre with notably fewer stops than urban rail systems. Toilets are often available on board trains and in stations.
Track.
Their ability to coexist with freight or intercity services in the same right-of-way can drastically reduce system construction costs. However, they are frequently built with dedicated tracks within that right-of-way to prevent delays, especially where service densities have converged in the inner parts of the network.
Most such trains run on the local standard gauge track. Some systems may run on a narrower or broader gauge. Examples of narrow-gauge systems are found in Japan, Indonesia, Malaysia, Thailand, Taiwan, Switzerland, in the Brisbane (Queensland Rail's City network) and Perth (Transperth) systems in Australia, in some systems in Sweden, and on the in Italy. Some countries and regions, including Finland, India, Pakistan, Russia, Brazil and Sri Lanka, as well as San Francisco (BART) in the US and Melbourne and Adelaide in Australia, use broad gauge track.
Distinction between other modes of rail.
Metro.
Metro rail and rapid transit usually cover smaller inner-urban areas within of city centers, have shorter stop spacing, and use rolling stock with larger standing spaces, lower top speed, and higher acceleration, designed for short-distance travel. They also run more frequently, operating to a headway rather than a published timetable, and use dedicated tracks (underground or elevated), whereas commuter rail often shares tracks, technology, and the legal framework of mainline railway systems, and uses rolling stock with more seating and higher speed for comfort on longer city-suburban journeys.
However, classification as metro or rapid rail can be difficult, as both may typically cover a metropolitan area exclusively, run on separate tracks in the centre, and often feature purpose-built rolling stock. The fact that the terminology is not standardised across countries (even across English-speaking countries) further complicates matters. The distinction is most easily made when there are two (or more) systems, such as New York's subway and the LIRR and Metro-North Railroad; Paris's Métro and the RER along with Transilien; Washington, D.C.'s Metro along with its MARC and VRE; London's Underground tube lines and the Overground, Elizabeth line, and Thameslink along with other commuter rail operators; Madrid's Metro and Cercanías; Barcelona's Metro and Rodalies; and Tokyo's subway and the JR lines along with various privately owned and operated commuter rail systems.
Regional rail.
Regional rail usually provides rail services between towns and cities, rather than purely linking major population hubs in the way inter-city rail does. Regional rail operates outside major cities. Unlike inter-city rail, it stops at most or all stations between cities. It provides a service between smaller communities along the line, which are often byproducts of ribbon development, and also connects with long-distance services at interchange stations located at junctions, terminals, or larger towns along the line. Alternative names are "local train" and "stopping train". Examples include the former British Rail's Regional Railways, France's TER ("Transport express régional"), Germany's Regionalexpress and Regionalbahn, and South Korea's Tonggeun and Mugunghwa-ho services.
Inter-city rail.
In some European countries, the distinction between commuter trains and long-distance/intercity trains is subtle, due to the relatively short distances involved. For example, so-called "intercity" trains in Belgium and the Netherlands carry many commuters, while their equipment, range, and speeds are similar to those of commuter trains in some larger countries.
The United Kingdom has a privatised rail system, with different routes and services covered by different private operators. The distinction between commuter and intercity rail is not as clear as it was before privatisation (when InterCity existed as a brand of its own), but it is usually still possible to tell the services apart. Some operators, for example Thameslink, focus solely on commuter services. Others, such as Avanti West Coast and LNER, run solely intercity services. Others still, such as GWR and EMR, run a mixture of commuter, regional and intercity services. Some of these operators use different branding for different types of service (for example, EMR brands its trains as "InterCity", "Connect" for London commuter services, or "Regional"), but even for those that do not, the type of train, the amenities offered, and the stopping pattern usually tell the services apart.
Russian commuter trains, on the other hand, frequently cover areas larger than Belgium itself, although these are still short distances by Russian standards. They have a different ticketing system from long-distance trains, and in major cities they often operate from a separate section of the train station.
Some consider "inter-city" service to be that which operates as an express service between two main city stations, bypassing intermediate stations. However, this term is used in Australia (Sydney for example) to describe the regional trains operating beyond the boundaries of the suburban services, even though some of these "inter-city" services stop all stations similar to German regional services. In this regard, the German service delineations and naming conventions are clearer and better used for academic purposes.
High-speed rail.
Sometimes high-speed rail serves daily commuters. The Japanese Shinkansen high-speed rail system is heavily used by commuters in the Greater Tokyo Area, who commute between by Shinkansen. To meet commuter demand, JR sells commuter discount passes. Before 2021, it operated 16-car bilevel E4 Series Shinkansen trains at rush hour, providing a capacity of 1,600 seats. Several lines in China, such as the Beijing–Tianjin Intercity Railway and the Shanghai–Nanjing High-Speed Railway, serve a similar role, with many more under construction or planned.
In South Korea, some sections of the high-speed rail network are also heavily used by commuters, such as the section between Gwangmyeong Station and Seoul Station on the KTX network (Gyeongbu HSR Line), or the section between Dongtan Station and Suseo station on the SRT Line.
The high-speed services linking Zurich, Bern and Basel in Switzerland have brought the central business districts (CBDs) of these three cities within one hour of each other. This has resulted in unexpectedly high demand for new commuter trips between the three cities and a corresponding increase in suburban rail passengers accessing the high-speed services at the main city-centre stations. The Regional-Express commuter service between Munich and Nuremberg in Germany runs at on the Nuremberg–Ingolstadt high-speed railway.
The regional trains Stockholm–Uppsala, Stockholm–Västerås, Stockholm–Eskilstuna and Gothenburg–Trollhättan in Sweden reach and have many daily commuters.
In Great Britain, the HS1 domestic services between London and Ashford run at a top speed of 225 km/h, and in peak hours the trains can be full, with commuters standing.
The Athens Suburban Railway in Greece consists of five lines, four of which are electrified. The Kiato–Piraeus and Aigio–Airport lines reach speeds of up to . The Athens–Chalcis line is also expected to attain speeds of up to once the SKA–Oinoi railway section is upgraded. These lines carry many daily commuters, and ridership is expected to rise further upon full completion of the Acharnes Railway Center.
The Eskişehir–Ankara and Konya–Ankara high-speed train routes serve as high-speed commuter trains in Turkey.
Train types.
Commuter/suburban trains are usually optimized for maximum passenger volume, in most cases without sacrificing too much comfort and luggage space, though they seldom have all the amenities of long-distance trains. Cars may be single- or double-level, and aim to provide seating for all. Compared to intercity trains, they have less space, fewer amenities and limited baggage areas.
Multiple unit type.
Commuter rail trains are usually composed of multiple units, which are self-propelled, bidirectional, articulated passenger rail cars with driving motors on each (or every other) bogie. Depending on local circumstances and tradition they may be powered either by diesel engines located below the passenger compartment (diesel multiple units) or by electricity picked up from third rails or overhead lines (electric multiple units). Multiple units are almost invariably equipped with control cabs at both ends, which is why such units are so frequently used to provide commuter services, due to the associated short turn-around time.
Locomotive hauled services.
Locomotive-hauled services are used in some countries or locations. This is often a case of asset sweating: using a single large combined fleet for intercity and regional services. Locomotive-hauled services are usually run in push-pull formation, that is, the train can run with the locomotive at either the "front" or the "rear" of the train (pushing or pulling). Trains are often equipped with a control cab at the other end of the train from the locomotive, allowing the train operator to operate the train from either end. The motive power for locomotive-hauled commuter trains may be either electric or diesel–electric, although some countries, such as Germany and some of the former Soviet-bloc countries, also use diesel–hydraulic locomotives.
Seat plans.
In the US and some other countries, a three-and-two seat plan is used. Middle seats on these trains are often less popular because passengers feel crowded and uncomfortable.
In Japan, South Korea and Indonesia, longitudinal (sideways, window-lining) seating is widely used on many commuter rail trains to increase capacity during rush hours. Carriages are usually not arranged to maximise seating capacity, even for commutes longer than 50 km (although on some trains at least one carriage features extra doors, to ease boarding and alighting, and bench seats that can be folded up during rush hour to provide more standing room), and commuters in the Greater Tokyo Area, the Seoul metropolitan area, and the Jabodetabek area often have to stand in the train for more than an hour.
Commuter rail systems around the world.
Africa.
Currently there are not many examples of commuter rail in Africa. Metrorail operates in the major cities of South Africa, and there are some commuter rail services in Algeria, Botswana, Kenya, Morocco, Egypt and Tunisia.
In Algeria, SNTF operates commuter rail lines between the capital Algiers and its southern and eastern suburbs; these also connect Algiers' main universities to each other. The Dar es Salaam commuter rail offers intracity services in Dar es Salaam, Tanzania. In Botswana, Botswana Railways' "BR Express" runs a commuter train between Lobatse and Gaborone.
Asia.
East Asia.
In Japan, commuter rail systems have extensive networks and frequent service, and are heavily used. In many cases, Japanese commuter rail is operationally more like a typical metro system (frequent trains, an emphasis on standing passengers, short station spacing) than like commuter rail in other countries. Japanese commuter rail lines commonly interline with city-centre subway lines, with commuter trains continuing into the subway network and then out onto different commuter rail systems on the other side of the city. Many Japanese commuter systems operate several stopping patterns to reduce the travel time to distant locations, often using station passing loops instead of dedicated express tracks. Notably, the larger Japanese commuter rail systems are owned and operated by for-profit private railway companies, without public subsidy.
East Japan Railway Company operates a large suburban train network in Tokyo, with various lines connecting the suburban areas to the city center. The Yamanote Line, Keihin Tohoku Line, and Chūō–Sōbu Line services are arguably more akin to rapid transit, with frequent stops, simple stopping patterns (relative to other JR East lines), no branching services, and a focus on the inner suburbs. Other services, along the Chūō Rapid Line, Sōbu Rapid Line/Yokosuka Line, Ueno–Tokyo Line, Shōnan–Shinjuku Line and others, are mid-distance services from suburban lines in the outer reaches of Greater Tokyo that run through onto these lines, forming high-frequency corridors through central Tokyo.
Other commuter rail routes in Japan include:
Commuter rail systems have been inaugurated in several cities in China, such as Beijing, Shanghai, Zhengzhou, Wuhan, Changsha and the Pearl River Delta, with plans for large systems in northeastern Zhejiang, Jingjinji, and the Yangtze River Delta. The level of service varies considerably from line to line, with speeds ranging from high to near-high. More developed and established lines, such as the Guangshen Railway, have more frequent, metro-like service.
Two MTR lines owned and formerly operated by the Kowloon-Canton Railway Corporation (then the "KCR"), namely the East Rail line and the Tuen Ma line (formed in 2021 by integrating the former West Rail line and Ma On Shan line), together with MTR's own Tung Chung line, connect the new towns in the New Territories and the city centre in Kowloon at frequent intervals; some New Territories-bound trains terminate at intermediate stations, providing more frequent service in Kowloon and the towns closer to Kowloon. These lines use rolling stock with a higher maximum speed and have longer stop spacing than lines that run only in the inner urban area, but in order to maximise capacity and throughput, the stock has longitudinal seating and five pairs of doors in each carriage, with large standing spaces like the urban lines, and runs just as frequently. Most sections of these lines are above ground, and some sections of the East Rail line share tracks with intercity trains to mainland China. The former KCR lines have been integrated into the MTR network since 2008, and most passengers do not need to exit and re-enter the system through separate fare gates or purchase separate tickets to transfer between these lines and the rest of the network (the exception is between the Tuen Ma line's East Tsim Sha Tsui station and the Tsuen Wan line's Tsim Sha Tsui station).
In Taiwan, the Western line in the Taipei-Taoyuan Metropolitan Area, Taichung Metropolitan Area and Tainan-Kaohsiung Metropolitan Area as well as the Neiwan-Liujia line in the Hsinchu Area are considered commuter rail.
In South Korea, the Seoul Metropolitan Subway includes a total of 22 lines, and some of its lines are suburban lines. This is especially the case for lines operated by Korail, such as the Gyeongui–Jungang Line, the Gyeongchun Line, the Suin–Bundang Line, and the Gyeonggang Line. Even some lines not operated by Korail, such as the AREX Line, the Seohae Line and the Shinbundang Line, mostly function as commuter rail. Lastly, even among the "numbered lines" (1–9) of the Seoul Metropolitan Subway, which mostly travel within the dense parts of Seoul, some track sections extend far outside the city and run at ground level for long stretches, as on Line 1, Line 3 and Line 4. In Busan, the Donghae Line, while part of the Busan Metro system, mostly functions as a commuter rail line.
Southeast Asia.
In Indonesia, the KRL Commuterline is the largest commuter rail system in the country, serving Greater Jakarta. It connects the Jakarta city center with surrounding cities and suburbs in Banten and West Java provinces, including Depok, Bogor, Tangerang, Serpong, Rangkasbitung, Bekasi and Cikarang. In July 2015, the KRL Commuterline served more than 850,000 passengers per day, almost triple the 2011 figure, but still less than 3.5% of all Jabodetabek commutes. Other commuter rail systems in Indonesia include the Metro Surabaya Commuter Line, Commuter Line Bandung, the KAI Commuter Yogyakarta–Solo Line, Kedung Sepur, and the Sri Lelawangsa.
In the Philippines, the Philippine National Railways has two commuter rail systems currently operational: the PNR Metro Commuter Line in the Greater Manila Area and the PNR Bicol Commuter in the Bicol Region. A new commuter rail line in Metro Manila, the North–South Commuter Railway, is under construction, with completion targeted for 2031.
In Malaysia, there are two commuter services operated by Keretapi Tanah Melayu. They are the KTM Komuter that serves Kuala Lumpur and the surrounding Klang Valley area, and the KTM Komuter Northern Sector that serves the George Town Conurbation, Perak, Kedah and Perlis in the northern region of Peninsular Malaysia.
In Thailand, the Greater Bangkok Commuter rail and the Airport Rail Link serve the Bangkok Metropolitan Region. The SRT Red Lines, a new commuter line in Bangkok, started construction in 2009. It opened in 2021.
Another commuter rail system in Southeast Asia is the Yangon Circular Railway in Myanmar.
South Asia.
In India, commuter rail systems are present in major cities and form an important part of people's daily lives. The Mumbai Suburban Railway, the oldest suburban rail system in Asia, carries more than 7.24 million commuters daily, which constitutes more than half of the total daily passenger capacity of Indian Railways itself. The Kolkata Suburban Railway, one of the largest suburban railway networks in the world, consists of more than 450 stations and carries more than 3.5 million commuters per day. The Chennai Suburban Railway, along with the Chennai MRTS, also covers over 300 stations and carries more than 2.5 million people daily to different areas in Chennai and its surroundings. Other commuter railways in India include the Hyderabad MMTS, the Delhi Suburban Railway, the Pune Suburban Railway and the Lucknow–Kanpur Suburban Railway.
In 2020, the Government of India approved the Bengaluru Suburban Railway to connect Bengaluru and its suburbs. It will be the first of its kind in India, featuring metro-like facilities and rolling stock.
In Bangladesh, there is one suburban railway, the Chittagong Circular Railway. Another, the Dhaka Circular Railway, is currently proposed.
Karachi, in Pakistan, has had a circular railway since 1969.
West Asia.
Tehran Metro currently operates the Line 5 commuter line between Tehran and Karaj.
Turkey has commuter rail in the cities of Ankara, Izmir, Istanbul and Gaziantep.
Europe.
Major metropolitan areas in most European countries are usually served by extensive commuter/suburban rail systems. Well-known examples include BG Voz in Belgrade (Serbia), S-Bahn in Germany, Austria and German-speaking areas of Switzerland, Proastiakos in Greece, RER in France and Belgium, Servizio ferroviario suburbano in Italy, Cercanías and Rodalies (Catalonia) in Spain, CP Urban Services in Portugal, Esko in Prague and Ostrava (Czech Republic), HÉV in Budapest (Hungary) and DART in Dublin (Ireland).
Western Europe.
London has multiple commuter rail routes:
The Merseyrail network in Liverpool consists of two commuter rail routes powered by third rail, both of which branch out at one end. At the other, the Northern line continues out of the city centre to a mainline rail interchange, while the Wirral line has a city-centre loop.
Birmingham has four suburban routes which operate out of Birmingham New Street & Birmingham Moor Street stations, one of which is operated using diesel trains.
The Tyneside Electrics system in Newcastle upon Tyne existed from 1904 to 1967, using DC third rail. British Rail did not have the budget to maintain the ageing electrification system, so the Riverside Branch was closed and the remaining lines were de-electrified. Thirteen years later they were re-electrified using DC overhead wires, and they now form the Tyne & Wear Metro Yellow Line.
Many of the rail services around Glasgow are branded as Strathclyde Partnership for Transport. The network includes most electrified Scottish rail routes.
The West Yorkshire Passenger Transport Executive runs eleven services which feed into Leeds, connecting the city with commuter areas and neighbouring urban centres in the West Yorkshire Built-up Area.
MetroWest is a proposed network in Bristol, northern Somerset & southern Gloucestershire. The four-tracking of the line between Bristol Temple Meads and Bristol Parkway stations will enable local rail services to be separated from long-distance trains.
The Réseau express régional d'Île-de-France (RER) is a commuter rail network serving the Paris agglomeration. In the centre, the RER runs through high-frequency underground corridors into which several suburban branches feed, similar to a rapid transit system.
Commuter rail systems in German-speaking regions are called S-Bahn. While in some major cities S-Bahn services run exclusively on separate lines, other systems use existing regional rail tracks.
Randstadspoor is a network of Sprinter train services in and around the city of Utrecht in the Netherlands. New stations were opened for the realisation of this network, and separate tracks have been built for these trains so that they can call frequently without disturbing the high-frequency Intercity services running parallel to these routes. Similar systems are planned for The Hague and Rotterdam.
Northern Europe.
In Sweden, electrified commuter rail systems known as "Pendeltåg" are present in the cities of Stockholm and Gothenburg. The Stockholm commuter rail system, which began in 1968, shares railway tracks with inter-city trains and freight trains, but for the most part runs on its own dedicated tracks. It is primarily used to transport passengers from nearby towns and other suburban areas into the city centre, not for transportation inside the city centre. The Gothenburg commuter rail system, which began in 1960, is similar to the Stockholm system, but does fully share tracks with long-distance trains.
In Norway, the Oslo commuter rail system has been more limited since 2022, but the remaining commuter lines run on tracks mostly little used by other trains. In 2022, several lines with hourly frequency and travel times to their endpoints of over one hour were redefined as regional trains. Before 2022, Oslo had the largest commuter rail system in the Nordic countries in terms of line length and number of stations. Bergen, Stavanger and Trondheim also have commuter rail systems; these have only one or two lines each and share tracks with other trains.
In Finland, the Helsinki commuter rail network runs on dedicated tracks from Helsinki Central railway station to Leppävaara and Kerava. The Ring Rail Line serves Helsinki Airport and northern suburbs of Vantaa and is exclusively used by the commuter rail network. On 15 December 2019, the Tampere region got its own commuter rail service, with trains running from Tampere to Nokia, Lempäälä and Orivesi.
Southern Europe.
In Spain, "Cercanías" networks exist in Madrid, Sevilla, Murcia/Alicante, San Sebastián, Cádiz, Valencia, Asturias, Santander, Zaragoza, Bilbao and Málaga. All these systems include underground sections in the city centre. There is also a network of narrow-gauge commuter systems in North Spain and Murcia.
Cercanías Madrid is one of the most important train services in the country, with more than 900,000 passengers moving through the system. It has underground stations in Madrid, such as Recoletos, Sol and Nuevos Ministerios, and in metropolitan-area cities such as Parla and Getafe.
In the autonomous community of Catalonia, and unlike the rest of Spain, the commuter service is not managed by Renfe Operadora. Since 2010, the Government of Catalonia has managed all the regular commuter services with the "transfer of "Rodalies"". There are two companies that manage the Catalan commuter network:
Since 2024, the Government of Catalonia has had full control of the R12 regional line, which is now owned by FGC. The current line will be eliminated and replaced by the new commuter lines RL3 and RL4, running from Lleida towards Cervera and Manresa respectively.
In Italy fifteen cities have commuter rail systems:
Eastern Europe.
In Poland, commuter rail systems exist in Tricity, Warsaw, Kraków (SKA) and Katowice (SKR). There is also a similar system planned in Wrocław and Szczecin. The terms used are "Szybka Kolej Miejska" (fast urban rail) and "kolej aglomeracyjna" (agglomeration rail). These systems are:
The Proastiakos ("suburban") is Greece's suburban railway (commuter rail) service, run by TrainOSE on infrastructure owned by the Hellenic Railways Organisation (OSE). There are three Proastiakos networks, serving the country's three largest cities: Athens, Thessaloniki and Patras. In particular, the Athenian network is undergoing modifications to separate it completely from mainline traffic, by re-routing the tracks via a tunnel underneath the city center. A similar project is planned for the Patras network, and a new line is due to be constructed for the Thessalonian network.
In Romania, the first commuter trains were introduced in December 2019. They operate between Bucharest and Fundulea or Buftea.
BG Voz is an urban rail system serving Belgrade. It currently has only two routes, with plans for further expansion. Between the early 1990s and the mid-2010s there was another system, known as Beovoz, that provided mass-transit service within the Belgrade metropolitan area as well as to nearby towns, similarly to the RER in Paris. Beovoz had more lines and far more stops than the current system, but it was abandoned in favor of BG Voz, mostly due to inefficiency. While current services rely mostly on existing infrastructure, any further development will require expanded capacity (new railways and new trains). Plans for further extension of the system include another two lines, one of which should reach Belgrade Nikola Tesla Airport.
In Russia, Ukraine and some other countries of the former Soviet Union, electric multiple unit suburban passenger trains called "elektrichka" are widespread. The first such system in Russia was the Oranienbaum Electric Line in St. Petersburg. In Moscow, the Beskudnikovskaya railway branch existed between the 1940s and 1980s; the trains that shuttled along it did not continue onto the main lines, so it served as city transport. Today there are the Moscow Central Circle and the Moscow Central Diameters.
In Turkey, the Marmaray line's stations from Sirkeci to Halkalı are located on the European side.
Americas.
North America.
In the United States, Canada, Costa Rica, El Salvador and Mexico, regional passenger rail services are provided by governmental or quasi-governmental agencies, with the busiest and most expansive rail networks located in the Northeastern US, California, and Eastern Canada. Most North American commuter railways use diesel locomotive propulsion, with the exception of services in New York City, Philadelphia, Chicago, Denver, San Francisco, and Mexico City; New York's commuter rail lines use a combination of third rail and overhead wire power, while in Chicago only two of twelve services are electrified. Many newer and proposed systems in Canada and the United States are geared toward serving peak-hour commutes, as opposed to the all-day systems of Europe, East Asia, and Australia.
United States.
Eight commuter rail systems in the United States carried over ten million trips each in 2018, those being in descending order:
Other commuter rail systems in the United States (not in ridership order) are:
South America.
Examples include a commuter system in the Buenos Aires metropolitan area, the long SuperVia in Rio de Janeiro, the Metrotrén in Santiago, Chile, and the Valparaíso Metro in Valparaíso, Chile.
Another example is the Companhia Paulista de Trens Metropolitanos (CPTM) in Greater São Paulo, Brazil. The CPTM has 94 stations on seven lines, numbered starting at 7 (lines 1 to 6 and line 15 belong to the São Paulo Metro), with a total length of . Trains operate at high frequencies on tracks used exclusively for commuter traffic. In Rio de Janeiro, SuperVia provides electrified commuter rail services.
Oceania.
The five major cities in Australia have suburban railway systems in their metropolitan areas. These networks have frequent services, with frequencies varying from every 10 to every 30 minutes on most suburban lines, and up to 3–5 minutes in peak on bundled underground lines in the city centres of Sydney, Brisbane, Perth and Melbourne. The networks in each state developed from mainline railways and have never been completely operationally separate from long distance and freight traffic, unlike metro systems. The suburban networks are almost completely electrified.
The main suburban rail networks in Australia are:
New Zealand has two frequent suburban rail services comparable to those in Australia: the Auckland rail network is operated by Auckland One Rail and the Wellington rail network is operated by Transdev Wellington.
Hybrid systems.
Hybrid urban-suburban rail systems, exhibiting characteristics of both rapid transit and commuter rail while serving a metropolitan region, are common in German-speaking countries, where they are known as S-Bahn. Other examples include the Lazio regional railways in Rome, the RER in France, and the Elizabeth line, London Underground Metropolitan line, London Overground and Merseyrail in the UK. Comparable systems can be found in Australia, such as Sydney Trains and Metro Trains Melbourne, and in Japan, where many urban and suburban lines operated by JR East/West and third-party companies run at metro-style frequencies. In contrast, systems of this type are generally rare in the United States and Canada, where service concentrated on peak hours is more common.
In Asia, the construction of higher-speed urban-suburban rail links has gained traction in various countries, such as India, with the Delhi RRTS; China, with the Pearl River Delta Metropolitan Region intercity railway; and South Korea, with the Great Train eXpress system. These systems usually run on dedicated elevated or underground tracks for most of their route and have features comparable to higher-speed rail.
|
6288
|
40193331
|
https://en.wikipedia.org/wiki?curid=6288
|
Cambridgeshire
|
Cambridgeshire (abbreviated Cambs.) is a ceremonial county in the East of England and East Anglia. It is bordered by Lincolnshire to the north, Norfolk to the north-east, Suffolk to the east, Essex and Hertfordshire to the south, Northamptonshire to the west, and Bedfordshire to the south-west. The largest settlement is the city of Peterborough, and the city of Cambridge is the county town.
The county has an area of and had an estimated population of 906,814 in 2022. Peterborough, in the north-west, and Cambridge, in the south, are by far the largest settlements. The remainder of the county is rural, and contains the city of Ely in the east, Wisbech in the north-east, and St Neots and Huntingdon in the west. For local government purposes Cambridgeshire comprises a non-metropolitan county, with five districts, and the unitary authority area of Peterborough; their local authorities collaborate through Cambridgeshire and Peterborough Combined Authority. The county did not historically include Huntingdonshire or the Soke of Peterborough, which was part of Northamptonshire.
The north and east of the county are dominated by the Fens, an extremely flat, drained marsh maintained by drainage ditches and dykes; Holme Fen is the UK's lowest physical point, at 2.75 m (9 ft) below sea level. The flatness of the landscape makes the few areas of higher ground, such as that Ely is built on, very conspicuous. The landscape in the south and west is gently undulating. Cambridgeshire's principal rivers are the Nene, which flows through the north of the county and is canalised east of Peterborough; the Great Ouse, which flows from west to east past Huntingdon and Ely; and the Cam, a tributary of the Great Ouse which flows through Cambridge.
History.
Cambridgeshire is noted as the site of Flag Fen in Fengate, one of the earliest-known Neolithic permanent settlements in the United Kingdom, compared in importance to Balbridie in Aberdeen, Scotland. Must Farm quarry, at Whittlesey, has been described as "Britain's Pompeii" due to its relatively good condition, including the "best-preserved Bronze Age dwellings ever found in the UK". A great quantity of archaeological finds from the Stone Age, the Bronze Age, and the Iron Age have been made in East Cambridgeshire, most of them at Isleham.
The area was settled by the Anglo-Saxons starting in the fifth century. Genetic testing on seven skeletons found in Anglo-Saxon era graves in Hinxton and Oakington found that five were either migrants or descended from migrants from the continent, one was a native Briton, and one had both continental and native ancestry, suggesting intermarriage.
Cambridgeshire was recorded in the "Domesday Book" as "Grantbridgeshire" (related to the river Granta). Covering a large part of East Anglia, Cambridgeshire today is the result of several local government unifications. When county councils were introduced in 1888, separate councils were set up, following the traditional division of the county, for Cambridgeshire proper and the Isle of Ely.
In 1965, these two administrative counties were merged to form Cambridgeshire and the Isle of Ely.
Under the Local Government Act 1972, this merged with the county to the west, Huntingdon and Peterborough, which had itself been formed in 1965 by the merger of Huntingdonshire with the Soke of Peterborough (the latter previously a part of Northamptonshire with its own county council). The resulting county was called simply Cambridgeshire.
Since 1998, the City of Peterborough has been separately administered as a unitary authority area. It is associated with Cambridgeshire for ceremonial purposes such as Lieutenancy and joint functions such as policing and the fire service.
In 2002, the conservation charity Plantlife unofficially designated Cambridgeshire's county flower as the Pasqueflower.
The Cambridgeshire Regiment (nicknamed the Fen Tigers), the county-based army unit, fought in the Boer War in South Africa, the First World War and Second World War.
Due to the county's flat terrain and proximity to the continent, during the Second World War the military built many airfields here for RAF Bomber Command, RAF Fighter Command, and the allied USAAF. In recognition of this collaboration, the Cambridge American Cemetery and Memorial is located in Madingley. It is the only Second World War burial ground in England for American servicemen who died during the war.
Most English counties have nicknames for their people, such as a "Tyke" from Yorkshire and a "Yellowbelly" from Lincolnshire. The historical nicknames for people from Cambridgeshire are "Cambridgeshire Camel" or "Cambridgeshire Crane", the latter referring to the wildfowl that were once abundant in the Fens. The term "Fen Tigers" is sometimes used to describe the people who live and work in the Fens.
Original historical documents relating to Cambridgeshire are held by Cambridgeshire Archives. Cambridgeshire County Council Libraries maintains several Local Studies collections of printed and published materials, significantly at the Cambridgeshire Collection held in the Cambridge Central Library.
Flag.
Cambridgeshire's county flag was selected from entries to a design competition that ran during 2014. The design features three golden crowns, two at the top and one at the bottom, separated by two wavy lines across the middle. The crowns represent East Anglia, and the wavy lines represent the River Cam and are in Cambridge University's colours.
"See also Geology of Cambridgeshire"
Geography.
Large areas of the county are extremely low-lying and Holme Fen is notable for being the UK's lowest physical point at 2.75 m (9 ft) below sea level. The highest point of the modern administrative county is in the village of Great Chishill at 146 m (480 ft) above sea level. However, this parish was historically a part of Essex, having been moved to Cambridgeshire in boundary changes in 1895. The historic county top is close to the village of Castle Camps where a point on the disused RAF airfield reaches a height of above sea level (grid reference TL 63282 41881).
Other prominent hills are Little Trees Hill and Wandlebury Hill (both at ) in the Gog Magog Hills, Rivey Hill above Linton, Rowley's Hill and the Madingley Hills.
Wicken Fen is a biological Site of Special Scientific Interest west of Wicken. A large part of it is owned and managed by the National Trust.
The Cambridge Green Belt around the city of Cambridge extends to places such as Waterbeach, Lode, Duxford, Little & Great Abington and other communities a few miles away in nearby districts, to afford protection from the conurbation. It was first drawn up in the 1950s.
Politics.
Cambridgeshire County Council is controlled by the Liberal Democrats, while Peterborough City Council is currently controlled by a Conservative Party minority administration.
The county contains eight Parliamentary constituencies:
Economy.
This is a chart of the trend of regional gross value added of Cambridgeshire at current basic prices, published (pp. 240–253) by the Office for National Statistics, with figures in millions of pounds sterling.
AWG plc is based in Huntingdon. The RAF has several stations in the Huntingdon and St Ives area. RAF Alconbury, three miles north of Huntingdon, is being reorganised after a period of obsolescence following the departure of the USAF, to be the focus of RAF/USAFE intelligence operations, with activities at Upwood and Molesworth being transferred there. Most of Cambridgeshire is agricultural. Close to Cambridge is the so-called Silicon Fen area of high-technology (electronics, computing and biotechnology) companies. ARM Limited is based in Cherry Hinton. The inland Port of Wisbech on the River Nene is the county's only remaining port.
Education.
Primary and secondary.
Cambridgeshire has a comprehensive education system with over 240 state schools, not including sixth form colleges. The independent sector includes King's Ely and Wisbech Grammar School, founded in 970 and 1379 respectively; they are two of the oldest schools in the country.
Some of the secondary schools act as Village Colleges, institutions unique to Cambridgeshire; Comberton Village College is one example.
Tertiary.
Cambridgeshire is home to a number of institutes of higher education:
In addition, Cambridge Regional College and Huntingdonshire Regional College both offer a limited range of higher education courses in conjunction with partner universities.
Settlements.
These are the settlements in Cambridgeshire with a town charter, city status or a population over 5,000; for a complete list of settlements see list of places in Cambridgeshire.
See the List of Cambridgeshire settlements by population page for more detail.
The town of Newmarket is surrounded on three sides by Cambridgeshire, being connected by a narrow strip of land to the rest of Suffolk.
Cambridgeshire has seen 32,869 dwellings created from 2002 to 2013 and there are a further 35,360 planned new dwellings between 2016 and 2023.
Climate.
Cambridgeshire has a maritime temperate climate which is broadly similar to the rest of the United Kingdom, though it is drier than the UK average due to its low altitude and easterly location, the prevailing southwesterly winds having already deposited moisture on higher ground further west. Average winter temperatures are cooler than the English average, due to Cambridgeshire's inland location and relative nearness to continental Europe, which results in the moderating maritime influence being less strong. Snowfall is slightly more common than in western areas, due to the relative winter coolness and easterly winds bringing occasional snow from the North Sea. In summer temperatures are average or slightly above, due to less cloud cover. It reaches on around ten days each year, and is comparable to parts of Kent and East Anglia.
Culture.
Sports.
Various forms of football have been popular in Cambridgeshire since medieval times at least. In 1579 one match played at Chesterton between townspeople and University of Cambridge students ended in a violent brawl that led the Vice-Chancellor to issue a decree forbidding them to play "footeball" outside of college grounds. During the nineteenth century, several formulations of the laws of football, known as the Cambridge rules, were created by students at the university. One of these codes, dating from 1863, had a significant influence on the creation of the original laws of the Football Association.
Cambridgeshire is also the birthplace of bandy, now an IOC accepted sport. According to documents from 1813, Bury Fen Bandy Club was undefeated for 100 years. A member of the club, Charles Goodman Tebbutt, wrote down the first official rules in 1882. Tebbutt was instrumental in spreading the sport to many countries. Great Britain Bandy Association is based in Cambridgeshire.
Fen skating is a traditional form of skating in the Fenland. The National Ice Skating Association was set up in Cambridge in 1879; it took the top Fen skaters to the world speed skating championships, where James Smart became world champion.
On 6–7 June 2015, the inaugural Tour of Cambridgeshire cycle race took place on closed roads across the county. The event was an official UCI qualification event, and consisted of a Time Trial on the 6th, and a Gran Fondo event on the 7th. The Gran Fondo event was open to the public, and over 6000 riders took part in the race.
The River Cam is the main river flowing through Cambridge; parts of the River Nene and the River Great Ouse also lie within the county. In 2021 the Great Ouse was used as the course for The Boat Race. The River Cam serves as the course for the university Lent Bumps and May Bumps and for the non-college rowing organised by the Cambridgeshire Rowing Association.
There is only one racecourse in Cambridgeshire, located at Huntingdon.
Contemporary art.
Cambridge is home to the Kettle's Yard gallery and the artist-run Aid and Abet project space. Nine miles west of Cambridge next to the village of Bourn is Wysing Arts Centre.
Wisbech has been home to the Wisbech Gallery, South Brink since 2023.
Cambridge Open Studios is the region's largest arts organisation, with over 500 members. Every year, more than 370 artists open their doors to visitors during four weekends in July.
Literature.
The annual Fenland Poet Laureate awards were instigated for poets in the North of the county in 2012 at Wisbech & Fenland Museum.
Theatre.
The county was visited by travelling companies of comedians in the Georgian period, coming from several different circuits. The Lincoln Circuit included, at various times, Wisbech and Whittlesey. The Wisbech Georgian theatre survives as a working theatre, now known as The Angles Theatre.
In Cambridge the ADC Theatre is the venue for the Footlights.
Media.
The county is covered by BBC East and ITV Anglia. Local radio includes BBC Radio Cambridgeshire, Greatest Hits Radio East, Heart East, Smooth East Midlands (only covering Peterborough), and Star Radio. The community radio stations are Black Cat Radio in St Neots; Cam FM and Cambridge 105 in Cambridge; Huntingdon Community Radio; and Peterborough Community Radio and Salaam Radio in Peterborough.
|
6290
|
28481209
|
https://en.wikipedia.org/wiki?curid=6290
|
Christian Goldbach
|
Christian Goldbach (18 March 1690 – 20 November 1764) was a Prussian mathematician who carried out important research, mainly in number theory; he also studied law and took an interest, and a role, in the Russian court. After traveling around Europe in his early life, he settled in Russia in 1725 as a professor at the newly founded Saint Petersburg Academy of Sciences. Goldbach jointly led the academy from 1737. However, he relinquished his duties in the academy in 1742 and worked in the Russian Ministry of Foreign Affairs until his death in 1764. He is remembered today for Goldbach's conjecture and the Goldbach–Euler theorem. He had a close friendship with the famous mathematician Leonhard Euler, serving as inspiration for Euler's mathematical pursuits.
Biography.
Early life.
Born in Königsberg, the capital of the Duchy of Prussia and part of Brandenburg-Prussia, Goldbach was the son of a pastor. He studied at the Royal Albertus University. After finishing his studies he made long educational trips through Europe from 1710 to 1724, visiting other German states, England, the Netherlands, Italy, and France, and meeting many famous mathematicians, such as Gottfried Leibniz, Leonhard Euler, and Nicholas I Bernoulli. These acquaintances sparked Goldbach's interest in mathematics. He briefly attended Oxford University in 1713 and, while there, studied mathematics with John Wallis and Isaac Newton. Goldbach's travels also fostered his interest in philology, archaeology, metaphysics, ballistics, and medicine. Between 1717 and 1724, Goldbach published his first few papers which, while minor, attested to his mathematical ability. Back in Königsberg, he became acquainted with Georg Bernhard Bilfinger and Jakob Hermann.
Saint Petersburg Academy of Sciences.
Goldbach followed Bilfinger and Hermann to the newly opened St. Petersburg Academy of Sciences in 1725. Christian Wolff had invited, and had written recommendations for, all the Germans who traveled to Saint Petersburg for the academy except Goldbach. Goldbach wrote to the president-designate of the academy, petitioning for a position and citing his past publications and his knowledge of medicine and law as qualifications. Goldbach was then hired on a five-year contract as professor of mathematics and historian of the academy. As historian, he recorded each academy meeting from the opening of the school in 1725 until January 1728. Goldbach worked with famous mathematicians such as Leonhard Euler, Daniel Bernoulli, Johann Bernoulli, and Jean le Rond d'Alembert. Goldbach also played a part in Euler's decision to pursue mathematics academically instead of medicine, cementing mathematics as the premier research field of the academy in the 1730s.
Russian government work.
In 1728, when Peter II became Tsar of Russia, Goldbach became tutor to Peter II and to Anna, Peter II's cousin. Peter II moved the Russian court from St. Petersburg to Moscow in 1729, so Goldbach followed him to Moscow. Goldbach started a correspondence with Euler in 1729, in which some of Goldbach's most important contributions to mathematics can be found. Upon Peter II's death in 1730, Goldbach stopped teaching but continued to assist Empress Anna. In 1732, Goldbach returned to the St. Petersburg Academy of Sciences and stayed in the Russian government when Anna moved the court back to St. Petersburg. Upon his return to the academy, Goldbach was named corresponding secretary, and his friend Euler continued his teaching and research at the academy as well. Then, in 1737, Goldbach and J. D. Schumacher took over the administration of the academy. Goldbach also took on duties in the Russian court under Empress Anna, and he managed to retain his influence at court after Anna's death and through the rule of Empress Elizabeth. In 1742 he entered the Russian Ministry of Foreign Affairs, stepping away from the academy once more. Goldbach was granted land and an increased salary for his good work and rise in the Russian government. In 1760, Goldbach created new guidelines for the education of the royal children, which remained in place for 100 years. He died on 20 November 1764, aged 74, in Moscow.
Christian Goldbach was multilingual – he wrote a diary in German and Latin, his letters were written in German, Latin, French, and Italian and for official documents he used Russian, German and Latin.
Contributions.
Goldbach is most noted for his correspondence with Leibniz, Euler, and Bernoulli, especially his 1742 letter to Euler stating what is now called Goldbach's conjecture. He studied and proved some theorems on perfect powers, such as the Goldbach–Euler theorem, and made several notable contributions to analysis. He also proved a result concerning Fermat numbers that is called Goldbach's theorem.
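Goldbach's conjecture asserts that every even integer greater than 2 is the sum of two primes. A small Python sketch (illustrative only; the helper names are ours, not from any source) that checks the conjecture for even numbers up to a modest bound:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return the set of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return {i for i, is_prime in enumerate(sieve) if is_prime}

def goldbach_witness(even_n, primes):
    """Return a pair of primes summing to even_n, or None if none exists."""
    for p in primes:
        if even_n - p in primes:
            return p, even_n - p
    return None

primes = primes_up_to(10_000)
for n in range(4, 10_001, 2):
    assert goldbach_witness(n, primes) is not None

print(goldbach_witness(100, primes))  # one valid pair, e.g. (3, 97)
```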
Impact on Euler.
It is Goldbach and Euler's correspondence that contains some of Goldbach's most important contributions to mathematics, specifically to number theory. Their friendship survived Goldbach's move to Moscow in 1728, and their correspondence continued, spanning 196 letters over 35 years, written in Latin, German, and French. These letters covered a wide range of subjects, including various topics in mathematics. Goldbach was the leading influence on Euler's interest and work in number theory: most of the letters discuss Euler's research in number theory as well as in differential calculus, and until the late 1750s Euler's correspondence on his number theory research was almost exclusively with Goldbach.
Goldbach's earlier mathematical work, and the ideas in his letters to Euler, directly influenced some of Euler's work. In 1729, Euler solved two problems pertaining to sequences that had stumped Goldbach, and he then outlined the solutions to Goldbach. Also in 1729, Goldbach closely approximated the value sought in the Basel problem, which prompted Euler's interest and his subsequent breakthrough solution. Through his letters, Goldbach kept Euler focused on number theory in the 1730s by discussing Fermat's conjecture with him; Euler subsequently offered a proof, crediting Goldbach with introducing him to the subfield. Euler went on to produce 560 writings, published posthumously in four volumes of his "Opera omnia", with Goldbach's influence guiding some of them. Goldbach's famous conjecture and his correspondence with Euler show him to be one of the handful of mathematicians of his time who understood complex number theory in light of Fermat's revolutionary ideas on the topic.
|
6291
|
7903804
|
https://en.wikipedia.org/wiki?curid=6291
|
Roman censor
|
The censor was a magistrate in ancient Rome who was responsible for maintaining the census, supervising public morality, and overseeing certain aspects of the government's finances.
Established under the Roman Republic, the power of the censor was limited in subject matter but absolute within his sphere: in matters reserved for the censors, no magistrate could oppose his decisions, and only another censor who succeeded him could cancel those decisions. Censors were also given unusually long terms of office; unlike other elected offices of the Republic, which (excluding certain priests elected for life) had terms of 12 months or less, censors' terms were generally 18 months to 5 years (depending on the era). The censorate was thus highly prestigious, preceding all other regular magistracies in dignity if not in power, and was reserved, with rare exceptions, for former consuls. Attaining the censorship would thus be considered the crowning achievement of a Roman politician on the "cursus honorum". However, the magistracy as a regular office did not survive the transition from the Republic to the Empire.
The censor's regulation of public morality is the origin of the modern meaning of the words "censor" and "censorship".
Early history of the magistracy.
According to Livy's "History of Rome", the "census" was first instituted by Servius Tullius, sixth king of Rome, BC. After the abolition of the monarchy and the founding of the Republic in 509 BC, the consuls had responsibility for the census until 443 BC. In 442 BC, no consuls were elected, but tribunes with consular power were appointed instead. This was a move by the plebeians to try to attain higher magistracies: only patricians could be elected consuls, while some military tribunes were plebeians. To prevent the possibility of plebeians obtaining control of the census, the patricians removed the right to take the census from the consuls and tribunes, and appointed for this duty two magistrates, called "censores" (censors), elected exclusively from the patricians in Rome.
The magistracy continued to be controlled by patricians until 351 BC, when Gaius Marcius Rutilus was appointed the first plebeian censor. Twelve years later, in 339 BC, one of the Publilian laws required that one censor had to be a plebeian. Despite this, no plebeian censor performed the solemn purification of the people (the "lustrum"; Livy "Periochae" 13) until 280 BC. In 131 BC, for the first time, both censors were plebeians.
The reason for having two censors was that the two consuls had previously taken the census together. If one of the censors died during his term of office, another was chosen to replace him, just as with consuls. This happened only once, in 393 BC. However, the Gauls captured Rome in that "lustrum" (five-year period), and the Romans thereafter regarded such replacement as "an offense against religion". From then on, if one of the censors died, his colleague resigned, and two new censors were chosen to replace them.
The office of censor was limited to eighteen months by a law proposed by the dictator Mamercus Aemilius Mamercinus. During the censorship of Appius Claudius Caecus (312–308 BC) the prestige of the censorship massively increased. Caecus built the first-ever Roman road (the Via Appia) and the first Roman aqueduct (the Aqua Appia), both named after him. He changed the organisation of the Roman tribes and was the first censor to draw the list of senators. He also advocated the founding of Roman "coloniae" throughout Latium and Campania to support the Roman war effort in the Second Samnite War. With these efforts and reforms, Appius Claudius Caecus was able to hold the censorship for a whole "lustrum" (five-year period), and the office of censor, subsequently entrusted with various important duties, eventually attained one of the highest political statuses in the Roman Republic, second only to that of the consuls.
Election.
The censors were elected in the Centuriate Assembly, which met under the presidency of a consul. Barthold Niebuhr suggests that the censors were at first elected by the Curiate Assembly, and that the Assembly's selections were confirmed by the Centuriate, but William Smith believes that "there is no authority for this supposition, and the truth of it depends entirely upon the correctness of [Niebuhr's] views respecting the election of the consuls". Both censors had to be elected on the same day, and accordingly if the voting for the second was not finished on the same day, the election of the first was invalidated, and a new assembly had to be held.
The assembly for the election of the censors was held under different auspices from those at the election of the consuls and praetors, so the censors were not regarded as their colleagues, although they likewise possessed the "maxima auspicia". The assembly was held by the new consuls shortly after they began their term of office; and the censors, as soon as they were elected and the censorial power had been granted to them by a decree of the Centuriate Assembly ("lex centuriata"), were fully installed in their office.
As a general principle, the only ones eligible for the office of censor were those who had previously been consuls, but there were a few exceptions. At first, there was no law to prevent a person being censor twice, but the only person who was elected to the office twice was Gaius Marcius Rutilus in 265 BC. In that year, he originated a law stating that no one could be elected censor twice. In consequence of this, he received the "cognomen" of Censorinus.
Attributes.
The censorship differed from all other Roman magistracies in the length of office. The censors were originally chosen for a whole "lustrum" (a period of five years), but as early as ten years after its institution (433 BC) their office was limited to eighteen months by a law of Dictator Mamercus Aemilius Mamercinus. The censors were also unique with respect to rank and dignity. They had no "imperium", and accordingly no lictors. Their rank was granted to them by the Centuriate Assembly, and not by the "curiae", and in that respect they were inferior in power to the consuls and praetors.
Notwithstanding this, the censorship was regarded as the highest dignity in the state, with the exception of the dictatorship; it was a "sacred magistracy" ("sanctus magistratus"), to which the deepest reverence was due. The high rank and dignity which the censorship obtained was due to the various important duties gradually entrusted to it, and especially to its possessing the "regimen morum", or general control over the conduct and the morals of the citizens. In the exercise of this power, they were regulated solely by their own views of duty, and were not responsible to any other power in the state.
The censors possessed the official stool called a "curule chair" ("sella curulis"), but some doubt exists with respect to their official dress. A well-known passage of Polybius describes the use of the "imagines" at funerals; we may conclude that a consul or praetor wore the purple-bordered "toga praetexta", one who triumphed the embroidered "toga picta", and the censor a purple toga peculiar to him, but other writers speak of their official dress as being the same as that of the other higher magistrates. The funeral of a censor was always conducted with great pomp and splendour, and hence a "censorial funeral" ("funus censorium") was voted even to the emperors.
Abolition.
The censorship continued in existence for 421 years, from 443 BC to 22 BC, but during this period, many "lustra" passed by without any censor being chosen at all. According to one statement, the office was abolished by Lucius Cornelius Sulla. Although the authority on which this statement rests is not of much weight, the fact itself is probable, since there was no census during the two "lustra" which elapsed from Sulla's dictatorship to Gnaeus Pompeius Magnus (Pompey)'s first consulship (82–70 BC), and any strict "imposition of morals" would have been found inconvenient to the aristocracy that supported Sulla.
If the censorship had been done away with by Sulla, it was at any rate restored in the consulship of Pompey and Marcus Licinius Crassus. Its power was limited by one of the laws of the tribune Publius Clodius Pulcher (58 BC), which prescribed certain regular forms of proceeding before the censors in expelling a person from the Roman Senate, and required that the censors be in agreement to exact this punishment. This law, however, was repealed in the third consulship of Pompey in 52 BC, on the urging of his colleague Q. Caecilius Metellus Scipio, but the office of the censorship never recovered its former power and influence.
During the civil wars which followed soon afterwards, no censors were elected; it was only after a long interval that they were again appointed, namely in 22 BC, when Augustus caused Lucius Munatius Plancus and Paullus Aemilius Lepidus to fill the office. This was the last time that such magistrates were appointed; the emperors in future discharged the duties of their office under the name of "Praefectura Morum" ("prefecture of the morals").
Some of the emperors sometimes took the name of censor when they held a census of the Roman people; this was the case with Claudius, who appointed the elder Lucius Vitellius as his colleague, and with Vespasian, who likewise had a colleague in his son Titus. Domitian assumed the title of "perpetual censor" ("censor perpetuus"), but this example was not imitated by succeeding emperors. In the reign of Decius, the elder Valerian was nominated to the censorship, but declined the position.
Duties.
The duties of the censors may be divided into three classes, all of which were closely connected with one another:
1. The "census", or register of the citizens and of their property, which included the revision of the list of senators and the review of the "equites";
2. The "regimen morum", or keeping of the public morals;
3. The administration of the finances of the state, under which fell the superintendence of the public buildings and the erection of all new public works.
The original business of the censorship was at first of a much more limited kind, and was restricted almost entirely to taking the census, but the possession of this power gradually brought with it fresh power and new duties, as is shown below. A general view of these duties is briefly expressed in the following passage of Cicero: "Censores populi aevitates, soboles, familias pecuniasque censento: urbis templa, vias, aquas, aerarium, vectigalia tuento: populique partes in tribus distribunto: exin pecunias, aevitates, ordines patiunto: equitum, peditumque prolem describunto: caelibes esse prohibento: mores populi regunto: probrum in senatu ne relinquunto." This can be translated as: "The Censors are to determine the generations, origins, families, and properties of the people; they are to (watch over/protect) the city's temples, roads, waters, treasury, and taxes; they are to distribute the people into their tribes; next, they are to (allow/approve) the properties, generations, and ranks [of the people]; they are to enrol the offspring of the knights and foot-soldiers; they are to forbid being unmarried; they are to guide the behavior of the people; they are not to overlook abuse in the Senate."
Census.
The Census, the first and principal duty of the censors, was always held in the Campus Martius, and from the year 435 BC onwards, in a special building called Villa publica, which was erected for that purpose by the second pair of censors, Gaius Furius Pacilus Fusus and Marcus Geganius Macerinus. An account of the formalities with which the census was opened is given in a fragment of the "Tabulae Censoriae", preserved by Varro. After the auspices had been taken, the citizens were summoned by a public crier to appear before the censors. Each tribe was called up separately, and the names in each tribe were probably taken according to the lists previously made out by the tribunes of the tribes. Every "pater familias" had to appear in person before the censors, who were seated in their curule chairs, and those names were taken first which were considered to be of good omen, such as Valerius, Salvius, Statorius, etc.
The Census was conducted according to the judgement of the censor ("ad arbitrium censoris"), but the censors laid down certain rules, sometimes called "leges censui censendo", in which mention was made of the different kinds of property subject to the census, and in what way their value was to be estimated. According to these laws, each citizen had to give an account of himself, of his family, and of his property upon oath, "declared from the heart". First he had to give his full name ("praenomen", "nomen", and "cognomen") and that of his father, or if he were a "libertus" ("freedman") that of his patron, and he was likewise obliged to state his age. He was then asked, "You, declaring from your heart, do you have a wife?" and if married he had to give the name of his wife, and likewise the number, names, and ages of his children, if any. Single women and orphans were represented by their guardians; their names were entered in separate lists, and they were not included in the sum total of heads.
After a citizen had stated his name, age, family, etc., he then had to give an account of all his property, so far as it was subject to the census. Only such things were liable to the census ("censui censendo") as were property according to the Quiritary law. At first, each citizen appears to have merely given the value of his whole property in general without entering into details; but it soon became the practice to give a minute specification of each article, as well as the general value of the whole. Land formed the most important article of the census, but public land, the possession of which only belonged to a citizen, was excluded as not being Quiritarian property. Judging from the practice of the imperial period, it was the custom to give a most minute specification of all such land as a citizen held according to the Quiritarian law. He had to state the name and location of the land, and to specify what portion of it was arable, what meadow, what vineyard, and what olive-ground: and of the land thus described, he had to give his assessment of its value.
Slaves and cattle formed the next most important item. The censors also possessed the right of calling for a return of such objects as had not usually been given in, such as clothing, jewels, and carriages. It has been doubted by some modern writers whether the censors possessed the power of setting a higher valuation on the property than the citizens themselves gave, but given the discretionary nature of the censors' powers, and the necessity almost that existed, in order to prevent fraud, that the right of making a surcharge should be vested in somebody's hands, it is likely that the censors had this power. It is moreover expressly stated that on one occasion they made an extravagant surcharge on articles of luxury; and even if they did not enter in their books the property of a person at a higher value than he returned it, they accomplished the same end by compelling him to pay a tax upon the property at a higher rate than others. The tax was usually one per thousand upon the property entered in the books of the censors, but on one occasion the censors compelled a person to pay eight per thousand as a punishment.
A person who voluntarily absented himself from the census was considered "incensus" and subject to the severest punishment. Servius Tullius is said to have threatened such individuals with imprisonment and death, and in the Republican period such a person might be sold by the state as a slave. In the later period of the Republic, a person who was absent from the census might be represented by another, and be thus registered by the censors. Whether the soldiers who were absent on service had to appoint a representative is uncertain. In ancient times, the sudden outbreaks of war prevented the census from being taken, because a large number of the citizens would necessarily be absent. It is supposed from a passage in Livy that in later times the censors sent commissioners into the provinces with full powers to take the census of the Roman soldiers there, but this seems to have been a special case. It is, on the contrary, probable from the way in which Cicero pleads the absence of Archias from Rome with the army under Lucullus, as a sufficient reason for his not having been enrolled in the census, that service in the army was a valid excuse for absence.
After the censors had received the names of all the citizens with the amount of their property, they then had to make out the lists of the tribes, and also of the classes and centuries; for by the legislation of Servius Tullius the position of each citizen in the state was determined by the amount of his property (Comitia Centuriata). These lists formed a most important part of the "Tabulae Censoriae", under which name were included all the documents connected in any way with the discharge of the censors' duties. These lists, insofar as they were connected with the finances of the state, were deposited in the "aerarium", located in the Temple of Saturn; but the regular depository for all the archives of the censors was in earlier times the Atrium Libertatis, near the Villa publica, and in later times the temple of the Nymphs.
In addition to the division of the citizens into tribes, centuries, and classes, the censors had the power to confirm or revise the list of senators, striking out the names of such as they considered unworthy, and making additions to the body from those who were qualified. In the same manner, they held a review of the "equites" who received a horse from public funds ("equites equo publico"), and added and removed names as they judged proper. They also confirmed the "princeps senatus", or appointed a new one. The princeps himself had to be a former censor. After the lists had been completed, the number of citizens was counted up, and the sum total announced. Accordingly, we find that in the account of a census, the number of citizens is likewise usually given. They are in such cases spoken of as "capita" ("heads"), sometimes with the addition of the word "civium" ("of the citizens"), and sometimes not. Hence, to be registered in the census was the same thing as "having a head" ("caput habere").
Census beyond Rome.
A census was sometimes taken in the provinces, even under the Republic. The emperor sent into the provinces special officers called "censitores" to take the census; but the duty was sometimes discharged by the Imperial "legati". The "censitores" were assisted by subordinate officers, called "censuales", who made out the lists, etc. In Rome, the census was still taken under the Empire, but the old ceremonies connected with it were no longer performed, and the ceremony of the "lustratio" was not performed after the time of Vespasian. The jurists Paulus and Ulpian each wrote works on the census in the imperial period; and several extracts from these works are given in a chapter in the "Digest" (50.15).
Other uses of census.
The word "census", besides the conventional meaning of "valuation" of a person's estate, has other meaning in Rome; it could refer to:
"Regimen morum".
Keeping the public morals ("regimen morum", or in the Empire "cura morum" or "praefectura morum") was the second most important branch of the censors' duties, and the one which caused their office to be one of the most revered and the most dreaded; hence they were also known as "castigatores" ("chastisers"). It naturally grew out of the right which they possessed of excluding persons from the lists of citizens; for, as has been well remarked, "they would, in the first place, be the sole judges of many questions of fact, such as whether a citizen had the qualifications required by law or custom for the rank which he claimed, or whether he had ever incurred any judicial sentence, which rendered him infamous: but from thence the transition was easy, according to Roman notions, to the decisions of questions of right; such as whether a citizen was really worthy of retaining his rank, whether he had not committed some act as justly degrading as those which incurred the sentence of the law."
In this manner, the censors gradually assumed at least nominal complete superintendence over the whole public and private life of every citizen. They were constituted as the conservators of public morality; they were not simply to prevent crime or particular acts of immorality, but rather to maintain the traditional Roman character, ethics, and habits ("mos majorum")—"regimen morum" also encompassed this protection of traditional ways, which was called in the times of the Empire "cura" ("supervision") or "praefectura" ("command"). The punishment inflicted by the censors in the exercise of this branch of their duties was called "nota" ("mark, letter") or "notatio", or "animadversio censoria" ("censorial reproach"). In inflicting it, they were guided only by their conscientious convictions of duty; they had to take an oath that they would act with neither partiality nor favour; and, in addition to this, they were bound in every case to state in their lists, opposite the name of the guilty citizen, the cause of the punishment inflicted on him, "subscriptio censoria".
This part of the censors' office invested them with a peculiar kind of jurisdiction, which in many respects resembled the exercise of public opinion in modern times; for there are innumerable actions which, though acknowledged by everyone to be prejudicial and immoral, still do not come within the reach of the positive laws of a country; as often said, "immorality does not equal illegality". Even in cases of real crimes, the positive laws frequently punish only the particular offence, while in public opinion the offender, even after he has undergone punishment, is still incapacitated for certain honours and distinctions which are granted only to persons of unblemished character.
Hence, the Roman censors might brand a man with their "censorial mark" ("nota censoria") in case he had been convicted of a crime in an ordinary court of justice, and had already suffered punishment for it. The consequence of such a "nota" was only "ignominia" and not "infamia", and the censorial verdict was not a "judicium" or "res judicata", for its effects were not lasting, but might be removed by the following censors, or by a "lex" (roughly "law"). A censorial mark was moreover not valid unless both censors agreed. The "ignominia" was thus only a transitory reduction of status, which does not even appear to have deprived a magistrate of his office, and certainly did not disqualify persons labouring under it for obtaining a magistracy, for being appointed as "judices" by the praetor, or for serving in the Roman army. Mamercus Aemilius Mamercinus was thus, notwithstanding the reproach of the censors ("animadversio censoria"), made dictator.
A person might be branded with a censorial mark in a variety of cases, which it would be impossible to specify, as in a great many instances it depended upon the discretion of the censors and the view they took of a case; and sometimes even one set of censors would overlook an offence which was severely chastised by their successors. But the offences which are recorded to have been punished by the censors are of a threefold nature.
A person who had been branded with a "nota censoria", might, if he considered himself wronged, endeavour to prove his innocence to the censors, and if he did not succeed, he might try to gain the protection of one of the censors, that he might intercede on his behalf.
Punishments.
The punishments inflicted by the censors generally differed according to the station which a man occupied, though sometimes a person of the highest rank might suffer all the punishments at once, by being degraded to the lowest class of citizens. The punishments are generally divided into four classes:
1. Exclusion from the Senate ("motio" or "ejectio e senatu");
2. Deprivation of the public horse ("ademptio equi"), by which an "eques" was degraded;
3. Removal from one's tribe ("motio e tribu");
4. Reduction to the lowest class of citizens, the "aerarii" ("referre in aerarios").
It was this authority of the Roman censors which eventually developed into the modern meaning of "censor" and "censorship"—i.e., officials who review published material and forbid the publication of material judged to be contrary to "public morality" as the term is interpreted in a given political and social environment.
Administration of the finances of the state.
The administration of the state's finances was another part of the censors' office. In the first place the "tributum", or property-tax, had to be paid by each citizen according to the amount of his property registered in the census, and, accordingly, the regulation of this tax naturally fell under the jurisdiction of the censors. They also had the superintendence of all the other revenues of the state, the "vectigalia", such as the tithes paid for the public lands, the salt works, the mines, the customs, etc.
The censors typically auctioned off the collection of the tithes and taxes (tax farming) to the highest bidder for the space of a "lustrum". This auctioning was called "venditio" or "locatio", and seems to have taken place in the month of March, in a public place in Rome. The terms on which they were let, together with the rights and duties of the purchasers, were all specified in the "leges censoriae", which the censors published in every case before the bidding commenced. For further particulars see Publicani.
The censors also possessed the right, though probably not without the assent of the Senate, of imposing new "vectigalia", and even of selling the land belonging to the state. It would thus appear that it was the duty of the censors to bring forward a budget for a five-year period, and to take care that the income of the state was sufficient for its expenditure during that time. In part, their duties resembled those of a modern minister of finance. The censors, however, did not receive the revenues of the state. All the public money was paid into the "aerarium", which was entirely under the jurisdiction of the Senate; and all disbursements were made by order of this body, which employed the quaestors as its officers.
Overseeing public works.
In one important department, the public works, the censors were entrusted with the expenditure of the public money (though the actual payments were no doubt made by the quaestors).
The censors had the general superintendence of all the public buildings and works ("opera publica"), and to meet the expenses connected with this part of their duties, the Senate voted them a certain sum of money or certain revenues, to which they were restricted, but which they might at the same time employ according to their discretion. They had to see that the temples and all other public buildings were in a good state of repair, that no public places were encroached upon by the occupation of private persons, and that the aqueducts, roads, drains, etc. were properly attended to.
The repairs of the public works and the keeping of them in proper condition were let out by the censors by public auction to the lowest bidder, just as the "vectigalia" were let out to the highest bidder. These expenses were called "ultrotributa", and hence we frequently find "vectigalia" and "ultrotributa" contrasted with one another. The persons who undertook the contract were called "conductores", "mancipes", "redemptores", "susceptores", etc., and the duties they had to discharge were specified in the Leges Censoriae. The censors had also to superintend the expenses connected with the worship of the gods, even for instance the feeding of the sacred geese in the Capitol; these various tasks were also let out on contract. It was ordinary for censors to expend large amounts of public money on their public works, "by far the largest and most extensive" expenditures of the state.
Besides keeping existing public buildings and facilities in a proper state of repair, the censors were also in charge of constructing new ones, either for ornament or utility, both in Rome and in other parts of Italy, such as temples, basilicae, theatres, porticoes, fora, aqueducts, town walls, harbours, bridges, cloacae, roads, etc. These works were either performed by them jointly, or they divided between them the money, which had been granted to them by the Senate. They were let out to contractors, like the other works mentioned above, and when they were completed, the censors had to see that the work was performed in accordance with the contract: this was called "opus probare" or "in acceptum referre".
The first ever Roman road, the Via Appia, and the first Roman aqueduct, the Aqua Appia, were all constructed under the censorship of Appius Claudius Caecus, one of the most influential censors.
The aediles had likewise a superintendence over the public buildings, and it is not easy to define with accuracy the respective duties of the censors and aediles, but it may be remarked in general that the superintendence of the aediles had more of a police character, while that of the censors was more financial in subject matter.
Lustrum.
After the censors had performed their various duties and taken the five-yearly census, the "lustrum", a solemn purification of the people, followed. When the censors entered upon their office, they drew lots to see which of them should perform this purification; but both censors were of course obliged to be present at the ceremony.
Long after the Roman census ceased to be taken, the Latin word "lustrum" survived, and it has been adopted in some modern languages in the derived sense of a period of five years, i.e., half a decennium.
|
6292
|
2051880
|
https://en.wikipedia.org/wiki?curid=6292
|
Convex set
|
In geometry, a set of points is convex if it contains every line segment between two points in the set.
For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex.
The boundary of a convex set in the plane is always a convex curve. The intersection of all the convex sets that contain a given subset "A" of Euclidean space is called the convex hull of "A". It is the smallest convex set containing "A".
A convex function is a real-valued function defined on an interval with the property that its epigraph (the set of points on or above the graph of the function) is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis.
Spaces in which convex sets are defined include the Euclidean spaces, the affine spaces over the real numbers, and certain non-Euclidean geometries.
Definitions.
Let "X" be a vector space or an affine space over the real numbers, or, more generally, over some ordered field (this includes Euclidean spaces, which are affine spaces). A subset "C" of "X" is convex if, for all "x" and "y" in "C", the line segment connecting "x" and "y" is included in "C".
This means that the affine combination $(1 - t)x + ty$ belongs to "C" for all "x" and "y" in "C" and "t" in the interval $[0, 1]$. This implies that convexity is invariant under affine transformations. Further, it implies that a convex set in a real or complex topological vector space is path-connected (and therefore also connected).
A set "C" is "strictly convex" if every point on the line segment connecting "x" and "y" other than the endpoints is inside the topological interior of "C". A closed convex subset is strictly convex if and only if every one of its boundary points is an extreme point.
A set is absolutely convex if it is convex and balanced.
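To make the definition concrete, the following Python sketch heuristically tests the convexity of a planar set given by a membership predicate, by sampling points along segments between random members. The "disc" and "annulus" predicates are illustrative choices, not taken from the text, and the test is probabilistic rather than a proof:

```python
import random

def looks_convex(contains, sampler, trials=10_000):
    """Heuristic convexity test: for random pairs x, y in the set,
    check that sampled points (1 - t)*x + t*y stay in the set."""
    for _ in range(trials):
        x, y = sampler(), sampler()
        t = random.random()
        p = ((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1])
        if not contains(p):
            return False
    return True

def sample(contains):
    """Rejection-sample a point of the set from a bounding square."""
    while True:
        p = (random.uniform(-1, 1), random.uniform(-1, 1))
        if contains(p):
            return p

disc = lambda p: p[0]**2 + p[1]**2 <= 1             # convex
annulus = lambda p: 0.25 <= p[0]**2 + p[1]**2 <= 1  # not convex

print(looks_convex(disc, lambda: sample(disc)))        # True
print(looks_convex(annulus, lambda: sample(annulus)))  # False (with high probability)
```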
Examples.
The convex subsets of "R" (the set of real numbers) are the intervals and the points of "R". Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of a Euclidean 3-dimensional space are the Archimedean solids and the Platonic solids. The Kepler-Poinsot polyhedra are examples of non-convex sets.
Non-convex set.
A set that is not convex is called a "non-convex set". A polygon that is not a convex polygon is sometimes called a concave polygon, and some sources more generally use the term "concave set" to mean a non-convex set, but most authorities prohibit this usage.
The complement of a convex set, such as the epigraph of a concave function, is sometimes called a "reverse convex set", especially in the context of mathematical optimization.
Properties.
Given $r$ points $u_1, \ldots, u_r$ in a convex set "S", and $r$ nonnegative numbers $\lambda_1, \ldots, \lambda_r$ such that $\lambda_1 + \cdots + \lambda_r = 1$, the affine combination
$\sum_{k=1}^{r} \lambda_k u_k$
belongs to "S". As the definition of a convex set is the case $r = 2$, this property characterizes convex sets.
Such an affine combination is called a convex combination of $u_1, \ldots, u_r$. The convex hull of a subset "S" of a real vector space is defined as the intersection of all convex sets that contain "S". More concretely, the convex hull is the set of all convex combinations of points in "S". In particular, this is a convex set.
A "(bounded) convex polytope" is the convex hull of a finite subset of some Euclidean space .
Intersections and unions.
The collection of convex subsets of a vector space, an affine space, or a Euclidean space has the following properties:
1. The empty set and the whole space are convex.
2. The intersection of any collection of convex sets is convex.
3. The union of a sequence of convex sets is convex if they form a non-decreasing chain for inclusion.
Closed convex sets.
Closed convex sets are convex sets that contain all their limit points. They can be characterised as the intersections of "closed half-spaces" (sets of points in space that lie on and to one side of a hyperplane).
From what has just been said, it is clear that such intersections are convex, and they will also be closed sets. To prove the converse, i.e., every closed convex set may be represented as such intersection, one needs the supporting hyperplane theorem in the form that for a given closed convex set and point outside it, there is a closed half-space that contains and not . The supporting hyperplane theorem is a special case of the Hahn–Banach theorem of functional analysis.
Face of a convex set.
A face of a convex set $C$ is a convex subset $F$ of $C$ such that whenever a point $p$ in $F$ lies strictly between two points $x$ and $y$ in $C$, both $x$ and $y$ must be in $F$. Equivalently, for any $x, y \in C$ and any real number $0 < t < 1$ such that $(1 - t)x + ty$ is in $F$, $x$ and $y$ must be in $F$. According to this definition, $C$ itself and the empty set are faces of $C$; these are sometimes called the "trivial faces" of $C$. An extreme point of $C$ is a point $p$ of $C$ such that $\{p\}$ is a face of $C$.
Let $C$ be a convex set in $\mathbb{R}^n$ that is compact (or equivalently, closed and bounded). Then $C$ is the convex hull of its extreme points. More generally, each compact convex set in a locally convex topological vector space is the closed convex hull of its extreme points (the Krein–Milman theorem).
For example, a solid triangle is the convex hull of its three vertices, and a closed disc is the convex hull of its bounding circle.
Convex sets and rectangles.
Let $C$ be a convex body in the plane (a convex set whose interior is non-empty). We can inscribe a rectangle "r" in $C$ such that a homothetic copy "R" of "r" is circumscribed about $C$. The positive homothety ratio is at most 2 and:
$\tfrac{1}{2} \cdot \operatorname{Area}(R) \leq \operatorname{Area}(C) \leq 2 \cdot \operatorname{Area}(r)$
Blaschke-Santaló diagrams.
The set $\mathcal{K}^2$ of all planar convex bodies can be parameterized in terms of the convex body diameter "D", its inradius "r" (the biggest circle contained in the convex body) and its circumradius "R" (the smallest circle containing the convex body). In fact, this set can be described by the set of inequalities given by
$2r \leq D \leq 2R$
$R \leq \frac{\sqrt{3}}{3} D$
$r + R \leq D$
$D^2 \sqrt{4R^2 - D^2} \leq 2R \bigl(2R + \sqrt{4R^2 - D^2}\bigr)$
and can be visualized as the image of the function "g" that maps a convex body to the point given by ("r"/"R", "D"/2"R"). The image of this function is known as a ("r", "D", "R") Blaschke-Santaló diagram.
Alternatively, the set $\mathcal{K}^2$ can also be parametrized by its width (the smallest distance between any two different parallel support hyperplanes), perimeter and area.
Other properties.
Let "X" be a topological vector space and formula_37 be convex.
Convex hulls and Minkowski sums.
Convex hulls.
Every subset "A" of the vector space is contained within a smallest convex set (called the "convex hull" of "A"), namely the intersection of all convex sets containing "A". The convex-hull operator Conv() has the characteristic properties of a closure operator:
1. "Extensive": $S \subseteq \operatorname{Conv}(S)$;
2. "Non-decreasing": $S \subseteq T$ implies that $\operatorname{Conv}(S) \subseteq \operatorname{Conv}(T)$;
3. "Idempotent": $\operatorname{Conv}(\operatorname{Conv}(S)) = \operatorname{Conv}(S)$.
The convex-hull operation is needed for the set of convex sets to form a lattice, in which the "join" operation is the convex hull of the union of two convex sets
$\operatorname{Conv}(S) \vee \operatorname{Conv}(T) = \operatorname{Conv}(S \cup T) = \operatorname{Conv}\bigl(\operatorname{Conv}(S) \cup \operatorname{Conv}(T)\bigr)$
The intersection of any collection of convex sets is itself convex, so the convex subsets of a (real or complex) vector space form a complete lattice.
Minkowski addition.
In a real vector-space, the "Minkowski sum" of two (non-empty) sets, $S_1$ and $S_2$, is defined to be the set formed by the addition of vectors element-wise from the summand-sets
$S_1 + S_2 = \{ x_1 + x_2 : x_1 \in S_1,\ x_2 \in S_2 \}$
More generally, the "Minkowski sum" of a finite family of (non-empty) sets $S_n$ is the set formed by element-wise addition of vectors
$\sum_n S_n = \Bigl\{ \sum_n x_n : x_n \in S_n \Bigr\}$
For Minkowski addition, the "zero set" $\{0\}$ containing only the zero vector $0$ has special importance: For every non-empty subset $S$ of a vector space
$S + \{0\} = S;$
in algebraic terminology, $\{0\}$ is the identity element of Minkowski addition (on the collection of non-empty sets).
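For finite planar point sets, the Minkowski sum can be computed directly from this definition. A minimal Python sketch (the helper name minkowski_sum is ours):

```python
def minkowski_sum(S, T):
    """Element-wise vector addition of two finite planar point sets."""
    return {(s[0] + t[0], s[1] + t[1]) for s in S for t in T}

square = {(0, 0), (1, 0), (0, 1), (1, 1)}
print(minkowski_sum(square, {(0, 0)}) == square)  # True: {0} is the identity
print(sorted(minkowski_sum(square, {(10, 0), (0, 10)})))  # two shifted copies
```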
Convex hulls of Minkowski sums.
Minkowski addition behaves well with respect to the operation of taking convex hulls, as shown by the following proposition:
Let $S_1$, $S_2$ be subsets of a real vector-space; the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls
$\operatorname{Conv}(S_1 + S_2) = \operatorname{Conv}(S_1) + \operatorname{Conv}(S_2)$
This result holds more generally for each finite collection of non-empty sets:
$\operatorname{Conv}\Bigl(\sum_n S_n\Bigr) = \sum_n \operatorname{Conv}(S_n)$
In mathematical terminology, the operations of Minkowski summation and of forming convex hulls are commuting operations.
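This commutation can be checked numerically. The sketch below assumes the convex_hull and minkowski_sum helpers from the earlier sketches are in scope, and it compares the extreme points of both sides (a convex polygon is determined by them):

```python
import random

# Assumes convex_hull and minkowski_sum from the sketches above.
S1 = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(20)]
S2 = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(20)]

lhs = convex_hull(minkowski_sum(set(S1), set(S2)))        # Conv(S1 + S2)
rhs = convex_hull(minkowski_sum(set(convex_hull(S1)),
                                set(convex_hull(S2))))    # Conv(S1) + Conv(S2)
print(lhs == rhs)  # True: both sides have the same extreme points
```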
Minkowski sums of convex sets.
The Minkowski sum of two compact convex sets is compact. The sum of a compact convex set and a closed convex set is closed.
The following famous theorem, proved by Dieudonné in 1966, gives a sufficient condition for the difference of two closed convex subsets to be closed. It uses the concept of a recession cone of a non-empty convex subset "S", defined as:
$\operatorname{rec} S = \{ x \in X : x + S \subseteq S \}$
where this set is a convex cone containing $0 \in X$ and satisfying $S + \operatorname{rec} S = S$. Note that if "S" is closed and convex then $\operatorname{rec} S$ is closed and for all $s_0 \in S$,
$\operatorname{rec} S = \bigcap_{t > 0} t (S - s_0)$
Theorem (Dieudonné). Let "A" and "B" be non-empty, closed, and convex subsets of a locally convex topological vector space such that $\operatorname{rec} A \cap \operatorname{rec} B$ is a linear subspace. If "A" or "B" is locally compact then "A" − "B" is closed.
Generalizations and extensions for convexity.
The notion of convexity in the Euclidean space may be generalized by modifying the definition in some or other aspects. The common name "generalized convexity" is used, because the resulting objects retain certain properties of convex sets.
Star-convex (star-shaped) sets.
Let "C" be a set in a real or complex vector space. "C" is star convex (star-shaped) if there exists an $x_0$ in "C" such that the line segment from $x_0$ to any point "y" in "C" is contained in "C". Hence a non-empty convex set is always star-convex but a star-convex set is not always convex.
Orthogonal convexity.
An example of generalized convexity is orthogonal convexity.
A set "S" in the Euclidean space is called orthogonally convex or ortho-convex, if any segment parallel to any of the coordinate axes connecting two points of "S" lies totally within "S". It is easy to prove that an intersection of any collection of orthoconvex sets is orthoconvex. Some other properties of convex sets are valid as well.
Non-Euclidean geometry.
The definition of a convex set and a convex hull extends naturally to geometries which are not Euclidean by defining a geodesically convex set to be one that contains the geodesics joining any two points in the set.
Order topology.
Convexity can be extended for a totally ordered set endowed with the order topology.
Let $Y \subseteq X$. The subspace $Y$ is a convex set if for each pair of points $a, b$ in $Y$ such that $a \leq b$, the interval $[a, b] = \{ x \in X : a \leq x \leq b \}$ is contained in $Y$. That is, $Y$ is convex if and only if for all $a, b$ in $Y$, $a \leq b$ implies $[a, b] \subseteq Y$.
A convex set is not connected in general: a counter-example is given by the subspace {1,2,3} in $\mathbb{Z}$, which is both convex and not connected.
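For a finite subset of the integers with their usual order, convexity reduces to containing every integer between the set's minimum and maximum. A minimal Python sketch (illustrative) demonstrates this on the example above:

```python
def is_order_convex(S):
    """A finite set of integers is order-convex iff it contains every
    integer between its minimum and maximum."""
    return not S or set(range(min(S), max(S) + 1)) <= set(S)

print(is_order_convex({1, 2, 3}))  # True: convex, though not connected as a subspace
print(is_order_convex({1, 3}))     # False: 2 is missing
```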
Convexity spaces.
The notion of convexity may be generalised to other objects, if certain properties of convexity are selected as axioms.
Given a set "X", a convexity over "X" is a collection "C" of subsets of "X" satisfying the following axioms:
1. The empty set and "X" are in "C".
2. The intersection of any collection from "C" is in "C".
3. The union of a chain (with respect to the inclusion relation) of elements of "C" is in "C".
The elements of "C" are called convex sets and the pair ("X", "C") is called a convexity space. For the ordinary convexity, the first two axioms hold, and the third one is trivial.
For an alternative definition of abstract convexity, more suited to discrete geometry, see the "convex geometries" associated with antimatroids.
Convex spaces.
Convexity can be generalised as an abstract algebraic structure: a space is convex if it is possible to take convex combinations of points.
|
6293
|
46051904
|
https://en.wikipedia.org/wiki?curid=6293
|
Cairo
|
Cairo is the capital and largest city of Egypt and the Cairo Governorate, being home to more than 10 million people. It is also part of the largest urban agglomeration in Africa, the Arab world, and the Middle East. The Greater Cairo metropolitan area is one of the largest in the world by population, with over 22.1 million people.
The area that would become Cairo was part of ancient Egypt, as the Giza pyramid complex and the ancient cities of Memphis and Heliopolis are nearby. The city's predecessor settlement was Fustat, founded near the Nile Delta following the Muslim conquest of Egypt in 641 next to an existing ancient Roman fortress, Babylon. Cairo itself was subsequently founded by the Fatimid dynasty in 969. It later superseded Fustat as the main urban centre during the Ayyubid and Mamluk periods (12th–16th centuries).
Cairo has since become a longstanding centre of political and cultural life, and is titled "the city of a thousand minarets" for its preponderance of Islamic architecture. Cairo's historic center was awarded World Heritage Site status in 1979. Cairo is considered a World City with a "Beta +" classification according to GaWC.
Cairo has the oldest and largest film and music industry in the Arab world, as well as Egypt's oldest institution of higher learning, Al-Azhar University. Many international media, businesses, and organizations have regional headquarters in the city; the Arab League has had its headquarters in Cairo for most of its existence.
Cairo, like many other megacities, suffers from high levels of pollution and traffic. The Cairo Metro, opened in 1987, is the oldest metro system in Africa, and ranks amongst the fifteen busiest in the world, with over 1 billion annual passenger rides. The economy of Cairo was ranked first in the Middle East in 2005, and 43rd globally on "Foreign Policy"'s 2010 Global Cities Index.
Etymology.
The name of Cairo is derived from the Arabic "al-Qāhira", meaning 'the Vanquisher' or 'the Conqueror', given by the Fatimid Caliph al-Mu'izz following the establishment of the city as the capital of the Fatimid dynasty. Its full, formal name was "Qāhirat al-Mu'izz", meaning 'the Vanquisher of al-Mu'izz'. The name is also supposedly due to the fact that the planet Mars, known in Arabic by names such as 'the Conquering Star', was rising at the time of the city's founding.
Egyptians often refer to Cairo as "Maṣr", the Egyptian Arabic name for Egypt itself, emphasizing the city's importance for the country.
There are a number of Coptic names for the city. "Tikešrōmi" is attested in the 1211 text "The Martyrdom of John of Phanijoit" and is either a calque meaning 'man breaker' (from the Coptic elements for 'the', 'to break', and 'man'), akin to the Arabic "al-Qāhira", or a derivation from the Arabic "qaṣr ar-rūm" ("the Roman castle"), another name of Babylon Fortress in Old Cairo. The Arabic name is also calqued as "the victor city" in the Coptic antiphonary.
The form "Khairon" is attested in the modern Coptic text Ⲡⲓⲫⲓⲣⲓ ⲛ̀ⲧⲉ ϯⲁⲅⲓⲁ ⲙ̀ⲙⲏⲓ Ⲃⲉⲣⲏⲛⲁ (The Tale of Saint Verina). Another Coptic name, descended from the Greek name of Heliopolis, is also attested. Some argue that a further attested Coptic name refers to Cairo, although others think that it is rather a name for the Abbasid provincial capital al-Askar. A popular modern rendering of an Arabic name for the city (others being "Kairon" and "Kahira") rests on a modern folk etymology meaning 'land of sun'. Some argue that this was the name of an Egyptian settlement upon which Cairo was built, but this is rather doubtful, as the name is not attested in any Hieroglyphic or Demotic source, although some researchers, like Paul Casanova, view it as a legitimate theory. Cairo is also referred to by the Coptic names for Egypt itself, the same way it is referred to in Egyptian Arabic.
Sometimes the city is informally referred to as "Kayro" by people from Alexandria.
History.
Ancient settlements.
The area around present-day Cairo had long been a focal point of Ancient Egypt due to its strategic location at the junction of the Nile Valley and the Nile Delta regions (roughly Upper Egypt and Lower Egypt), which also placed it at the crossing of major routes between North Africa and the Levant. Memphis, the capital of Egypt during the Old Kingdom and a major city up until the Ptolemaic period, was located a short distance southwest of present-day Cairo. Heliopolis, another important city and major religious center, was located in what are now the modern districts of Matariya and Ain Shams in northeastern Cairo. It was largely destroyed by the Persian invasions in 525 BC and 343 BC and partly abandoned by the late first century BC.
However, the origins of modern Cairo are generally traced back to a series of settlements in the first millennium AD. Around the turn of the fourth century, as Memphis was continuing to decline in importance, the Romans established a large fortress along the east bank of the Nile. The fortress, called Babylon, was built by the Roman emperor Diocletian (r. 285–305) at the entrance of a canal connecting the Nile to the Red Sea that was created earlier by Emperor Trajan (r. 98–117). Further north of the fortress, near the present-day district of al-Azbakiya, was a port and fortified outpost known as Tendunyas or Umm Dunayn. While no structures older than the 7th century have been preserved in the area aside from the Roman fortifications, historical evidence suggests that a sizeable city existed. The city was important enough that its bishop, Cyrus, participated in the Second Council of Ephesus in 449.
The Byzantine-Sassanian War between 602 and 628 caused great hardship and likely drove much of the urban population to the countryside, leaving the settlement partly deserted. The site today remains at the nucleus of the Coptic Orthodox community, which separated from the Roman and Byzantine churches in the mid-5th century. Cairo's oldest extant churches, such as the Church of Saint Barbara and the Church of Saints Sergius and Bacchus (from the late 7th or early 8th century), are located inside the fortress walls in what is now known as Old Cairo or Coptic Cairo.
Fustat and other early Islamic settlements.
The Muslim conquest of Byzantine Egypt was led by Amr ibn al-As from 639 to 642. Babylon Fortress was besieged in September 640 and fell in April 641. In 641 or early 642, after the surrender of Alexandria (the Egyptian capital at the time), he founded a new settlement next to Babylon Fortress. The city, known as Fustat, served as a garrison town and as the new administrative capital of Egypt. Historians such as Janet Abu-Lughod and André Raymond trace the genesis of present-day Cairo to the foundation of Fustat. The choice of founding a new settlement at this inland location, instead of using the existing capital of Alexandria on the Mediterranean coast, may have been due to the new conquerors' strategic priorities. One of the first projects of the new Muslim administration was to clear and re-open Trajan's ancient canal in order to ship grain more directly from Egypt to Medina, the capital of the caliphate in Arabia. Ibn al-As also founded a mosque for the city at the same time, now known as the Mosque of Amr Ibn al-As, the oldest mosque in Egypt and Africa (although the current structure dates from later expansions).
In 750, following the overthrow of the Umayyad Caliphate by the Abbasids, the new rulers created their own settlement to the northeast of Fustat which became the new provincial capital. This was known as al-Askar, as it was laid out like a military camp. A governor's residence and a new mosque were also added, with the latter completed in 786. The Red Sea canal re-excavated in the 7th century was closed by the Abbasid Caliph al-Mansur (r. 754–775), but a part of the canal, known as the Khalij, continued to be a major feature of Cairo's geography and of its water supply until the 19th century. In 861, on the orders of the Abbasid Caliph al-Mutawakkil, a Nilometer was built on Roda Island near Fustat. Although it was repaired and given a new roof in later centuries, its basic structure is still preserved, making it the oldest Islamic-era structure in Cairo today.
In 868 a commander of Turkic origin named Bakbak was sent to Egypt by the Abbasid Caliph al-Mu'tazz to restore order after a rebellion in the country. He was accompanied by his stepson, Ahmad ibn Tulun, who became effective governor of Egypt. Over time, Ibn Tulun gained an army and accumulated influence and wealth, allowing him to become the "de facto" independent ruler of both Egypt and Syria by 878. In 870, he used his growing wealth to found a new administrative capital, al-Qata'i, to the northeast of Fustat and of al-Askar. The new city included a palace known as the "Dar al-Imara", a parade ground known as "al-Maydan", a bimaristan (hospital), and an aqueduct to supply water. Between 876 and 879 Ibn Tulun built a great mosque, now known as the Mosque of Ibn Tulun, at the center of the city, next to the palace. After his death in 884, Ibn Tulun was succeeded by his son and his descendants who continued a short-lived dynasty, the Tulunids. In 905, the Abbasids sent general Muhammad Sulayman al-Katib to re-assert direct control over the country. Tulunid rule was ended and al-Qata'i was razed to the ground, except for the mosque which remains standing today.
Foundation and expansion of Cairo under the Fatimids.
In 969, the Fatimid Caliphate conquered Egypt after ruling from Ifriqiya. The Fatimid Caliph al-Mu'izz li-Din Allah instructed his courtier and general Jawhar al-Saqili to establish a new fortified city northeast of Fustat and of former al-Qata'i. It took four years to build the city, initially known as al-Manṣūriyyah, which was to serve as the new capital of the caliphate. During that time, the construction of the al-Azhar Mosque was commissioned by order of the caliph, which developed into the third-oldest university in the world. Cairo would eventually become a centre of learning, with the library of Cairo containing hundreds of thousands of books. When Caliph al-Mu'izz arrived from the old Fatimid capital of Mahdia in Tunisia in 973, he gave the city its present name, "Qāhirat al-Mu'izz" ("The Vanquisher of al-Mu'izz"), from which the name "Cairo" ("al-Qāhira") originates. The caliphs lived in a vast and lavish palace complex that occupied the heart of the city. Cairo remained a relatively exclusive royal city for most of this era, but during the tenure of Badr al-Gamali as vizier (1073–1094) the restrictions were loosened for the first time and richer families from Fustat were allowed to move into the city. Between 1087 and 1092 Badr al-Gamali also rebuilt the city walls in stone and constructed the city gates of Bab al-Futuh, Bab al-Nasr, and Bab Zuweila that still stand today.
During the Fatimid period Fustat reached its apogee in size and prosperity, acting as a center of craftsmanship and international trade and as the area's main port on the Nile. Historical sources report that multi-story communal residences existed in the city, particularly in its center, which were typically inhabited by middle and lower-class residents. Some of these were as high as seven stories and could house some 200 to 350 people. They may have been similar to Roman "insulae" and may have been the prototypes for the rental apartment complexes which became common in the later Mamluk and Ottoman periods. However, in 1168 the Fatimid vizier Shawar set fire to the unfortified Fustat to prevent its potential capture by Amalric, the Crusader king of Jerusalem. While the fire did not destroy the city and it continued to exist afterward, it did mark the beginning of its decline. Over the following centuries it was Cairo, the former palace-city, that became the new economic center and attracted migration from Fustat.
While the Crusaders did not capture the city in 1168, a continuing power struggle between Shawar, King Amalric, and the Zengid general Shirkuh led to the downfall of the Fatimid establishment. In 1169, Shirkuh's nephew Saladin was appointed as the new vizier of Egypt by the Fatimids and two years later he seized power from the family of the last Fatimid caliph, al-'Āḍid. As the first Sultan of Egypt, Saladin established the Ayyubid dynasty, based in Cairo, and aligned Egypt with the Sunni Abbasids, who were based in Baghdad. In 1176, Saladin began construction on the Cairo Citadel, which was to serve as the seat of the Egyptian government until the mid-19th century. The construction of the Citadel definitively ended Fatimid-built Cairo's status as an exclusive palace-city and opened it up to common Egyptians and to foreign merchants, spurring its commercial development. Along with the Citadel, Saladin also began the construction of a new 20-kilometre-long wall that would protect both Cairo and Fustat on their eastern side and connect them with the new Citadel. These construction projects continued beyond Saladin's lifetime and were completed under his Ayyubid successors.
Further expansion and decline under the Ayyubids and Mamluks.
In 1250, during the Seventh Crusade, the Ayyubid dynasty was thrown into crisis by the death of al-Salih, and power transitioned instead to the Mamluks, partly with the help of al-Salih's wife, Shajar ad-Durr, who ruled for a brief period around this time. Mamluks were soldiers who were purchased as young slaves and raised to serve in the sultan's army. Between 1250 and 1517 the throne of the Mamluk Sultanate passed from one mamluk to another in a system of succession that was generally non-hereditary, but also frequently violent and chaotic. The Mamluk Empire nonetheless became a major power in the region and was responsible for repelling the advance of the Mongols (most famously at the Battle of Ain Jalut in 1260) and for eliminating the last Crusader states in the Levant.
Despite their military character, the Mamluks were also prolific builders and left a rich architectural legacy throughout Cairo. Continuing a practice started by the Ayyubids, much of the land occupied by former Fatimid palaces was sold and replaced by newer buildings, becoming a prestigious site for the construction of Mamluk religious and funerary complexes. Construction projects initiated by the Mamluks pushed the city outward while also bringing new infrastructure to the centre of the city. Meanwhile, Cairo flourished as a centre of Islamic scholarship and a crossroads on the spice trade route among the civilisations in Afro-Eurasia. Under the reign of the Mamluk sultan al-Nasir Muhammad (1293–1341, with interregnums), Cairo reached its apogee in terms of population and wealth. By 1340, Cairo had a population of close to half a million, making it the largest city west of China.
Multi-story buildings occupied by rental apartments, known as a "rab'" (plural "ribā'" or "urbu"), became common in the Mamluk period and continued to be a feature of the city's housing during the later Ottoman period. These apartments were often laid out as multi-story duplexes or triplexes. They were sometimes attached to caravanserais, where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants. The oldest partially-preserved example of this type of structure is the Wikala of Amir Qawsun, built before 1341. Residential buildings were in turn organized into close-knit neighbourhoods called a "harat", which in many cases had gates that could be closed off at night or during disturbances.
When the traveller Ibn Battuta first came to Cairo in 1326, he described it as the principal district of Egypt. When he passed through the area again on his return journey in 1348, the Black Death was ravaging most major cities. He cited reports of thousands of deaths per day in Cairo. Although Cairo avoided Europe's stagnation during the Late Middle Ages, it could not escape the Black Death, which struck the city more than fifty times between 1348 and 1517. During its initial and most deadly waves, approximately 200,000 people were killed by the plague, and, by the 15th century, Cairo's population had been reduced to between 150,000 and 300,000. The population decline was accompanied by a period of political instability between 1348 and 1412. It was nonetheless in this period that the largest Mamluk-era religious monument, the Madrasa-Mosque of Sultan Hasan, was built. In the late 14th century, the Burji Mamluks replaced the Bahri Mamluks as rulers of the Mamluk state, but the Mamluk system continued to decline.
Though the plagues returned frequently throughout the 15th century, Cairo remained a major metropolis and its population recovered in part through rural migration. More conscious efforts were conducted by rulers and city officials to redress the city's infrastructure and cleanliness. Its economy and politics also became more deeply connected with the wider Mediterranean. Some Mamluk sultans in this period, such as Barsbay (r. 1422–1438) and Qaytbay (r. 1468–1496), had relatively long and successful reigns. After al-Nasir Muhammad, Qaytbay was one of the most prolific patrons of art and architecture of the Mamluk era. He built or restored numerous monuments in Cairo, in addition to commissioning projects beyond Egypt. The crisis of Mamluk power and of Cairo's economic role deepened after Qaytbay. The city's status was diminished after Vasco da Gama discovered a sea route around the Cape of Good Hope between 1497 and 1499, thereby allowing spice traders to avoid Cairo.
Ottoman rule.
Cairo's political influence diminished significantly after the Ottomans defeated Sultan al-Ghuri in the Battle of Marj Dabiq in 1516 and conquered Egypt in 1517. Ruling from Constantinople, Sultan Selim I relegated Egypt to a province, with Cairo as its capital. For this reason, the history of Cairo during Ottoman times is often described as inconsequential, especially in comparison to other time periods.
During the 16th and 17th centuries, Cairo still remained an important economic and cultural centre. Although no longer on the spice route, the city facilitated the transportation of Yemeni coffee and Indian textiles, primarily to Anatolia, North Africa, and the Balkans. Cairene merchants were instrumental in bringing goods to the barren Hejaz, especially during the annual hajj to Mecca. It was during this same period that al-Azhar University reached the predominance among Islamic schools that it continues to hold today; pilgrims on their way to hajj often attested to the superiority of the institution, which had become associated with Egypt's body of Islamic scholars. The first printing press of the Middle East, printing in Hebrew, was established in Cairo by a scion of the Soncino family of printers, Italian Jews of Ashkenazi origin who operated a press in Constantinople. The existence of the press is known solely from two fragments discovered in the Cairo Geniza.
Under the Ottomans, Cairo expanded south and west from its nucleus around the Citadel. The city was the second-largest in the empire, behind Constantinople, and, although migration was not the primary source of Cairo's growth, twenty percent of its population at the end of the 18th century consisted of religious minorities and foreigners from around the Mediterranean. Still, when Napoleon arrived in Cairo in 1798, the city's population was less than 300,000, forty percent lower than it was at the height of Mamluk—and Cairene—influence in the mid-14th century.
The French occupation was short-lived as British and Ottoman forces, including a sizeable Albanian contingent, recaptured the country in 1801. Cairo itself was besieged by a British and Ottoman force culminating with the French surrender on 22 June 1801. The British vacated Egypt two years later, leaving the Ottomans, the Albanians, and the long-weakened Mamluks jostling for control of the country. Continued civil war allowed an Albanian named Muhammad Ali Pasha to ascend to the role of commander and eventually, with the approval of the religious establishment, viceroy of Egypt in 1805.
Modern era.
From his rise to power in 1805 until his death in 1848, Muhammad Ali Pasha instituted a number of social and economic reforms that earned him the title of founder of modern Egypt. However, while Muhammad Ali initiated the construction of public buildings in the city, those reforms had minimal effect on Cairo's landscape. Bigger changes came to Cairo under Isma'il Pasha (r. 1863–1879), who continued the modernisation processes started by his grandfather. Drawing inspiration from Paris, Isma'il envisioned a city of maidans and wide avenues; due to financial constraints, only some of them, in the area now composing Downtown Cairo, came to fruition. Isma'il also sought to modernize the city, which was merging with neighbouring settlements, by establishing a public works ministry, bringing gas and lighting to the city, and opening a theatre and opera house.
The immense debt resulting from Isma'il's projects provided a pretext for increasing European control, which culminated with the British invasion in 1882. The city's economic centre quickly moved west toward the Nile, away from the historic Islamic Cairo section and toward the contemporary, European-style areas built by Isma'il. Europeans accounted for five percent of Cairo's population at the end of the 19th century, by which point they held most top governmental positions.
In 1906, the Heliopolis Oasis Company headed by the Belgian industrialist Édouard Empain and his Egyptian counterpart Boghos Nubar, built a suburb called Heliopolis (city of the sun in Greek) ten kilometers from the center of Cairo. In 1905–1907 the northern part of the Gezira island was developed by the Baehler Company into Zamalek, which would later become Cairo's upscale "chic" neighbourhood. In 1906 construction began on Garden City, a neighbourhood of urban villas with gardens and curved streets.
The British occupation was intended to be temporary, but it lasted well into the 20th century. Nationalists staged large-scale demonstrations in Cairo in 1919, five years after Egypt had been declared a British protectorate. These protests led to Egypt's independence in 1922.
The King Fuad I Edition of the Qur'an was first published on 10 July 1924 in Cairo under the patronage of King Fuad. The goal of the government of the newly formed Kingdom of Egypt was not to delegitimize the other variant Quranic texts ("qira'at"), but to eliminate errors found in Qur'anic texts used in state schools. A committee of teachers chose to preserve a single one of the canonical qira'at "readings", namely that of the "Ḥafṣ" version, an 8th-century Kufic recitation. This edition has become the standard for modern printings of the Quran for much of the Islamic world. The publication has been called a "terrific success", and the edition has been described as one "now widely seen as the official text of the Qur'an", so popular among both Sunni and Shi'a that the common belief among less well-informed Muslims is "that the Qur'an has a single, unambiguous reading". Minor amendments were made later, in 1924 and in 1936 (the "Faruq edition", in honour of the then ruler, King Faruq).
British occupation until 1956.
British troops remained in the country until 1956. During this time, urban Cairo, spurred by new bridges and transport links, continued to expand to include the upscale neighbourhoods of Garden City, Zamalek, and Heliopolis. Between 1882 and 1937, the population of Cairo more than tripled, from 347,000 to 1.3 million, and its area expanded considerably.
The city was devastated during the 1952 riots known as the Cairo Fire or Black Saturday, which saw the destruction of nearly 700 shops, movie theatres, casinos and hotels in downtown Cairo. The British departed Cairo following the Egyptian Revolution of 1952, but the city's rapid growth showed no signs of abating. Seeking to accommodate the increasing population, President Gamal Abdel Nasser redeveloped Tahrir Square and the Nile Corniche, and improved the city's network of bridges and highways. Meanwhile, additional controls of the Nile fostered development within Gezira Island and along the city's waterfront. The metropolis began to encroach on the fertile Nile Delta, prompting the government to build desert satellite towns and devise incentives for city-dwellers to move to them.
After 1956.
In the second half of the 20th century, Cairo continued to grow enormously in both population and area. Between 1947 and 2006, the population of Greater Cairo went from 2,986,280 to 16,292,269. The population explosion also drove the rise of "informal" housing ("'ashwa'iyyat"), meaning housing built without any official planning or control. The exact form of this type of housing varies considerably but usually has a much higher population density than formal housing. By 2009, over 63% of the population of Greater Cairo lived in informal neighbourhoods, even though these occupied only 17% of the total area of Greater Cairo. According to economist David Sims, informal housing has the benefits of providing affordable accommodation and vibrant communities to huge numbers of Cairo's working classes, but it also suffers from government neglect, a relative lack of services, and overcrowding.
The "formal" city was also expanded. The most notable example was the creation of Madinat Nasr, a huge government-sponsored expansion of the city to the east which officially began in 1959 but was primarily developed in the mid-1970s. Starting in 1977 the Egyptian government established the New Urban Communities Authority to initiate and direct the development of new planned cities on the outskirts of Cairo, generally established on desert land. These new satellite cities were intended to provide housing, investment, and employment opportunities for the region's growing population as well as to pre-empt the further growth of informal neighbourhoods. As of 2014, about 10% of the population of Greater Cairo lived in the new cities.
Concurrently, Cairo established itself as a political and economic hub for North Africa and the Arab world, with many multinational businesses and organisations, including the Arab League, operating out of the city. In 1979 the historic districts of Cairo were listed as a UNESCO World Heritage Site.
In 1992, Cairo was hit by an earthquake that caused 545 deaths, injured 6,512 people and left around 50,000 homeless.
2011 Egyptian revolution.
Cairo's Tahrir Square was the focal point of the 2011 Egyptian revolution against former president Hosni Mubarak. More than 50,000 protesters first occupied the square on 25 January, during which the area's wireless services were reported to be impaired. In the following days Tahrir Square continued to be the primary destination for protests in Cairo. The uprising was mainly a campaign of non-violent civil resistance, which featured a series of demonstrations, marches, acts of civil disobedience, and labour strikes. Millions of protesters from a variety of socio-economic and religious backgrounds demanded the overthrow of the regime of Egyptian President Hosni Mubarak. Despite being predominantly peaceful in nature, the revolution was not without violent clashes between security forces and protesters, with at least 846 people killed and 6,000 injured. The uprising took place in Cairo, Alexandria, and in other cities in Egypt, following the Tunisian revolution that resulted in the overthrow of the long-time Tunisian president Zine El Abidine Ben Ali. On 11 February, following weeks of determined popular protest and pressure, Hosni Mubarak resigned from office.
Post-revolutionary Cairo.
Under the rule of President el-Sisi, in March 2015 plans were announced for another yet-unnamed planned city to be built further east of the existing satellite city of New Cairo, intended to serve as the new capital of Egypt.
Geography.
Cairo is located in northern Egypt, known as Lower Egypt, south of the Mediterranean Sea and west of the Gulf of Suez and Suez Canal. The city lies along the Nile River, immediately south of the point where the river leaves its desert-bound valley and branches into the low-lying Nile Delta region. Although the Cairo metropolis extends away from the Nile in all directions, the city of Cairo resides only on the east bank of the river and two islands within it. Geologically, Cairo lies on alluvium and sand dunes that date from the Quaternary period.
Until the mid-19th century, when the river was tamed by dams, levees, and other controls, the Nile in the vicinity of Cairo was highly susceptible to changes in course and surface level. Over the years, the Nile gradually shifted westward, providing the site between the eastern edge of the river and the Mokattam highlands on which the city now stands. The land on which Cairo was established in 969 (present-day Islamic Cairo) was located underwater just over three hundred years earlier, when Fustat was first built.
Low periods of the Nile during the 11th century continued to add to the landscape of Cairo; a new island, known as "Geziret al-Fil", first appeared in 1174, but eventually became connected to the mainland. Today, the site of "Geziret al-Fil" is occupied by the Shubra district. The low periods created another island at the turn of the 14th century that now composes Zamalek and Gezira. Land reclamation efforts by the Mamluks and Ottomans further contributed to expansion on the east bank of the river.
Because of the Nile's movement, the newer parts of the city—Garden City, Downtown Cairo, and Zamalek—are located closest to the riverbank. The areas, which are home to most of Cairo's embassies, are surrounded on the north, east, and south by the older parts of the city. Old Cairo, located south of the centre, holds the remnants of Fustat and the heart of Egypt's Coptic Christian community, Coptic Cairo. The Boulaq district, which lies in the northern part of the city, was born out of a major 16th-century port and is now a major industrial centre. The Citadel is located east of the city centre around Islamic Cairo, which dates back to the Fatimid era and the foundation of Cairo. While western Cairo is dominated by wide boulevards, open spaces, and modern architecture of European influence, the eastern half, having grown haphazardly over the centuries, is dominated by small lanes, crowded tenements, and Islamic architecture.
Northern and extreme eastern parts of Cairo, which include satellite towns, are among the most recent additions to the city, as they developed in the late-20th and early-21st centuries to accommodate the city's rapid growth. The western bank of the Nile is commonly included within the urban area of Cairo, but it composes the city of Giza and the Giza Governorate. Giza city has also undergone significant expansion over recent years, and today has a population of 2.7 million. The Cairo Governorate was just north of the Helwan Governorate from 2008, when some of Cairo's southern districts, including Maadi and New Cairo, were split off and annexed into the new governorate, until 2011, when the Helwan Governorate was reincorporated into the Cairo Governorate.
According to the World Health Organization, the level of air pollution in Cairo is nearly 12 times higher than the recommended safety level.
Climate.
In Cairo, and along the Nile River Valley, the climate is a hot desert climate ("BWh" according to the Köppen climate classification system).
Wind storms can be frequent from March to May, bringing Saharan dust into the city, and the air often becomes uncomfortably dry. Winters are mild to warm, while summers are long and hot. Rainfall is sparse and occurs only in the colder months, but sudden showers can cause severe flooding. The summer months have high humidity because of the city's proximity to the Mediterranean coast. Snowfall is extremely rare; a small amount of graupel, widely believed to be snow, fell on Cairo's easternmost suburbs on 13 December 2013, the first time Cairo's area received this kind of precipitation in many decades.
Metropolitan area and districts.
The city of Cairo forms part of Greater Cairo, the largest metropolitan area in Africa. While it has no administrative body, the Ministry of Planning considers it an economic region consisting of Cairo Governorate, Giza Governorate, and Qalyubia Governorate. As a contiguous metropolitan area, various studies have considered Greater Cairo to be composed of the administrative cities of Cairo, Giza and Shubra al-Kheima, in addition to the satellite cities/new towns surrounding them.
Cairo is a city-state where the governor is also the head of the city. Cairo City itself differs from other Egyptian cities in that it has an extra administrative division between the city and district levels, and that is areas, which are headed by deputy governors. Cairo consists of 4 areas "(manatiq, singl. mantiqa)" divided into 38 districts "(ahya', singl. hayy)" and 46 qisms (police wards, 1-2 per district):
The Northern Area is divided into 8 Districts:
The Eastern Area is divided into 9 Districts and three new cities:
The Western Area is divided into 9 Districts:
The Southern Area is divided into 12 Districts:
Satellite cities.
Since 1977 a number of new towns have been planned and built by the New Urban Communities Authority (NUCA) in the Eastern Desert around Cairo, ostensibly to accommodate additional population growth and development of the city and stem the development of self-built informal areas, especially over agricultural land. As of 2022 four new towns have been built and have residential populations: 15th of May City, Badr City, Shorouk City, and New Cairo. In addition, two more are under construction: the New Administrative Capital, and Capital Gardens, where land was allocated in 2021 and which will house most of the civil servants employed in the new capital.
Planned new capital.
In March 2015, plans were announced for a new city to be built east of Cairo, in an undeveloped area of the Cairo Governorate, which would serve as the New Administrative Capital of Egypt.
Demographics.
According to the 2017 census, Cairo had a population of 9,539,673 people, distributed across 46 qisms (police wards):
Religion.
The majority of Egypt and Cairo's population is Sunni Muslim. A significant Christian minority exists, among whom Coptic Orthodox are the majority. Precise numbers for each religious community in Egypt are not available and estimates vary. Other churches that have, or had, a presence in modern Cairo include the Catholic Church (including Armenian Catholic, Coptic Catholic, Chaldean Catholic, Syrian Catholic, and Maronite), the Greek Orthodox Church, the Evangelical Church of Egypt (Synod of the Nile), and some Protestant churches. Cairo has been the seat of the Coptic Orthodox Church since the 12th century, and the seat of the Coptic Orthodox Pope is located in Saint Mark's Coptic Orthodox Cathedral. Until the 20th century, Cairo had a sizeable Jewish community, but as of 2022 only three Jews were reported to be living in the city. A total of 12 synagogues in Cairo still exist.
Economy.
Cairo's economy has traditionally been based on governmental institutions and services, with the modern productive sector expanding in the 20th century to include developments in textiles and food processing – specifically the production of sugar cane. As of 2005, Egypt has the largest non-oil based GDP in the Arab world.
Cairo accounts for 11% of Egypt's population and 22% of its economy (PPP). The majority of the nation's commerce is generated there, or passes through the city. The great majority of publishing houses and media outlets and nearly all film studios are there, as are half of the nation's hospital beds and universities. This has fuelled rapid construction in the city, with one building in five being less than 15 years old.
Until recently this growth surged well ahead of city services. Homes, roads, electricity, telephone and sewer services were all in short supply. Analysts trying to grasp the magnitude of the change coined terms like "hyper-urbanization".
Infrastructure.
Health.
Cairo, as well as neighbouring Giza, has been established as Egypt's main centre for medical treatment, and despite some exceptions, has the most advanced level of medical care in the country. Cairo's hospitals include the JCI-accredited As-Salaam International Hospital, Ain Shams University Hospital, Dar Al Fouad, Nile Badrawi Hospital, 57357 Hospital, as well as Qasr El Eyni Hospital.
Education.
Greater Cairo has long been the hub of education and educational services for Egypt and the region.
Today, Greater Cairo is the centre for many government offices governing the Egyptian educational system, has the largest number of educational schools, and higher education institutes among other cities and governorates of Egypt.
Some of the International Schools found in Cairo:
Universities in Greater Cairo:
Transport.
Cairo has an extensive road network, rail system, subway system and maritime services. Road transport is facilitated by personal vehicles, taxi cabs, privately owned public buses and microbuses. Cairo International Airport is the country's largest airport and one of the busiest airports in Africa.
Public transportation.
Cairo, specifically Ramses Station, is the centre of almost the entire Egyptian transportation network.
The Cairo Transportation Authority (CTA) manages Cairo's public transit. The subway system, the Cairo Metro, is a fast and efficient way of getting around Cairo. The metro network covers Helwan and other suburbs. It can get very crowded during rush hour. Two train cars (the fourth and fifth ones) are reserved for women only, although women may ride in any car they want.
Trams in Greater Cairo and the Cairo trolleybus once served as modes of transportation, but most lines were closed in the 1970s; the surviving lines in Heliopolis and Helwan were shut down in 2014, after the Egyptian Revolution.
In 2017, plans to construct two monorail systems were announced, one linking 6th of October to suburban Giza and the other linking Nasr City to New Cairo.
Roads.
Two trans-African automobile routes originate in Cairo: the Cairo-Cape Town Highway and the Cairo-Dakar Highway. An extensive road network connects Cairo with other Egyptian cities and villages. There is a new Ring Road that surrounds the outskirts of the city, with exits that reach outer Cairo districts. There are flyovers and bridges, such as the 6th October Bridge that, when the traffic is not heavy, allow fast means of transportation from one side of the city to the other.
Cairo traffic is known to be overwhelming and overcrowded, yet it moves at a relatively fluid pace. Drivers tend to be aggressive, but are more courteous at junctions, taking turns going, with police aiding in traffic control of some congested areas.
Culture.
Cairo Opera House.
President Mubarak inaugurated the new Cairo Opera House of the Egyptian National Cultural Centres on 10 October 1988, 17 years after the Royal Opera House had been destroyed by fire. The National Cultural Centre was built with the help of JICA, the Japan International Co-operation Agency, and stands as a prominent symbol of the co-operation and friendship between Japan and Egypt.
Khedivial Opera House.
The Khedivial Opera House, or Royal Opera House, was the original opera house in Cairo. It was dedicated on 1 November 1869 and burned down on 28 October 1971. After the original opera house was destroyed, Cairo was without an opera house for nearly two decades until the opening of the new Cairo Opera House in 1988.
Cairo International Film Festival.
Cairo held its first international film festival 16 August 1976, when the first Cairo International Film Festival was launched by the Egyptian Association of Film Writers and Critics, headed by Kamal El-Mallakh. The Association ran the festival for seven years until 1983.
This achievement led to the President of the Festival again contacting the FIAPF with the request that a competition be included at the 1991 Festival. The request was granted.
In 1998, the Festival took place under the presidency of one of Egypt's leading actors, Hussein Fahmy, who was appointed by the Minister of Culture, Farouk Hosni, after the death of Saad El-Din Wahba. Four years later, the journalist and writer Cherif El-Shoubashy became president.
Cairo Geniza.
The Cairo Geniza is an accumulation of almost 200,000 Jewish manuscripts that were found in the "genizah" of the Ben Ezra Synagogue (built 882) of Fustat, Egypt (now Old Cairo), the Basatin cemetery east of Old Cairo, and a number of old documents that were bought in Cairo in the later 19th century. These documents were written from about 870 to 1880 AD and have been archived in various American and European libraries. The Taylor-Schechter collection in the University of Cambridge runs to 140,000 manuscripts; a further 40,000 manuscripts are housed at the Jewish Theological Seminary of America.
Sports.
Football is the most popular sport in Egypt, and Cairo has sporting teams that compete in national and regional leagues, most notably Al Ahly and Zamalek SC, who were the CAF first and second African clubs of the 20th century. The annual match between Al Ahly and El Zamalek is one of the most watched sports events in Egypt. The teams form the major rivalry of Egyptian football. They play their home games at Cairo International Stadium, which is the second largest stadium in Egypt, as well as the largest in Cairo.
The Cairo International Stadium was built in 1960. Its multi-purpose sports complex houses the main football stadium, an indoor stadium, and satellite fields that have held regional and continental games, including the African Games, the U17 Football World Championship and the 2006 Africa Cup of Nations, which Egypt won. Egypt also won the next edition in Ghana (2008), making the Egyptian and Ghanaian national teams the only ones to win the African Nations Cup back to back, and followed with a third consecutive win in Angola in 2010, making Egypt the only country with three consecutive titles and a record seven continental titles in total. As of 2021, Egypt's national team is ranked #46 in the world by FIFA.
Cairo failed at the applicant stage when bidding for the 2008 Summer Olympics, which was hosted in Beijing. However, Cairo did host the 2007 Pan Arab Games.
There are other sports teams in the city that participate in several sports, including Gezira Sporting Club, el Shams Club, Shooting Club, Heliopolis Sporting Club, and several smaller clubs. There are also newer sports clubs in the area of New Cairo (about an hour from downtown Cairo), namely Al Zohour Sporting Club, Wadi Degla Sporting Club and Platinum Club.
Most of the sports federations of the country are located in the city suburbs, including the Egyptian Football Association. The headquarters of the Confederation of African Football (CAF) was previously located in Cairo, before relocating to its new headquarters in 6 October City, a small city outside Cairo's crowded districts. In 2008, the Egyptian Rugby Federation was officially formed and granted membership into the International Rugby Board.
Egypt is internationally known for the excellence of its squash players who excel in professional and junior divisions. Egypt has seven players in the top ten of the PSA men's world rankings, and three in the women's top ten. Mohamed El Shorbagy held the world number one position for more than a year. Nour El Sherbini has won the Women's World Championship twice and has been the women's world number one. On 30 April 2016, she became the youngest woman to win the Women's World Championship. In 2017 she retained her title.
Cairo is the official endpoint of the Cross Egypt Challenge, whose route ends each year under the Great Pyramids of Giza with a large trophy-giving ceremony.
Cityscape and landmarks.
Tahrir Square.
Tahrir Square was founded during the mid 19th century with the establishment of modern downtown Cairo. It was first named Ismailia Square, after the 19th-century ruler Khedive Ismail, who commissioned the new downtown district's 'Paris on the Nile' design. After the Egyptian Revolution of 1919 the square became widely known as Tahrir (Liberation) Square, though it was not officially renamed as such until after the 1952 Revolution, which eliminated the monarchy. Several notable buildings surround the square, including the American University in Cairo's downtown campus, the Mogamma governmental administrative building, the headquarters of the Arab League, the Nile Ritz-Carlton Hotel, and the Egyptian Museum. Being at the heart of Cairo, the square has witnessed several major protests over the years, most notably as the focal point of the 2011 Egyptian revolution against former president Hosni Mubarak. In 2020 the government completed the erection of a new monument in the center of the square featuring an ancient obelisk from the reign of Ramses II, originally unearthed at Tanis (San al-Hagar) in 2019, and four ram-headed sphinx statues moved from Karnak.
Egyptian Museum.
The Museum of Egyptian Antiquities, known commonly as the Egyptian Museum, is home to the most extensive collection of ancient Egyptian antiquities in the world. It has 136,000 items on display, with many more hundreds of thousands in its basement storerooms. Among the collections on display are the finds from the tomb of Tutankhamun.
Grand Egyptian Museum.
Much of the collection of the Museum of Egyptian Antiquities, including the Tutankhamun collection, is slated to be moved to the new Grand Egyptian Museum, under construction in Giza, which was due to open by the end of 2020.
Cairo Tower.
The Cairo Tower is a free-standing tower with a revolving restaurant at the top. It is one of Cairo's landmarks and provides a bird's-eye view of the city to restaurant patrons. It stands in the Zamalek district on Gezira Island on the Nile River, in the city centre, and is higher than the Great Pyramid of Giza, which stands to the southwest.
Old Cairo.
This area of Cairo is so-named as it contains the remains of the ancient Roman fortress of Babylon and also overlaps the original site of Fustat, the first Arab settlement in Egypt (7th century AD) and the predecessor of later Cairo. The area includes Coptic Cairo, which holds a high concentration of old Christian churches such as the Hanging Church, the Greek Orthodox Church of St. George, and other Christian or Coptic buildings, most of which are located in an enclave on the site of the ancient Roman fortress. It is also the location of the Coptic Museum, which showcases the history of Coptic art from Greco-Roman to Islamic times, and of the Ben Ezra Synagogue, the oldest and best-known synagogue in Cairo, where the important collection of Geniza documents was discovered in the 19th century.
To the north of this Coptic enclave is the Amr ibn al-'As Mosque, the first mosque in Egypt and the most important religious centre of what was formerly Fustat, founded in 642 AD right after the Arab conquest but rebuilt many times since. A part of the former city of Fustat has also been excavated to the east of the mosque and of the Coptic enclave, although the archeological site is threatened by encroaching construction and modern development. To the northwest of Babylon Fortress and the mosque is the Monastery of Saint Mercurius (or "Dayr Abu Sayfayn"), an important and historic Coptic religious complex consisting of the Church of Saint Mercurius, the Church of Saint Shenute, and the Church of the Virgin (also known as "al-Damshiriya"). Several other historic churches are also situated to the south of Babylon Fortress.
Islamic Cairo.
Cairo holds one of the greatest concentrations of historical monuments of Islamic architecture in the world. The areas around the old walled city and around the Citadel are characterized by hundreds of mosques, tombs, madrasas, mansions, caravanserais, and fortifications dating from the Islamic era and are often referred to as "Islamic Cairo", especially in English travel literature. It is also the location of several important religious shrines such as the al-Hussein Mosque (whose shrine is believed to hold the head of Husayn ibn Ali), the Mausoleum of Imam al-Shafi'i (founder of the Shafi'i "madhhab", one of the primary schools of thought in Sunni Islamic jurisprudence), the Tomb of Sayyida Ruqayya, the Mosque of Sayyida Nafisa, and others.
The first mosque in Egypt was the Mosque of Amr ibn al-As in what was formerly Fustat, the first Arab-Muslim settlement in the area. However, the Mosque of Ibn Tulun is the oldest mosque that still retains its original form and is a rare example of Abbasid architecture from the classical period of Islamic civilization. It was built in 876–879 AD in a style inspired by the Abbasid capital of Samarra in Iraq. It is one of the largest mosques in Cairo and is often cited as one of the most beautiful. Another Abbasid construction, the Nilometer on Roda Island, is the oldest original structure in Cairo, built in 862 AD. It was designed to measure the level of the Nile, which was important for agricultural and administrative purposes.
The settlement that was formally named Cairo (Arabic: "al-Qahira") was founded to the northeast of Fustat in 969 AD by the victorious Fatimid army. The Fatimids built it as a separate palatial city which contained their palaces and institutions of government. It was enclosed by a circuit of walls, which were rebuilt in stone in the late 11th century AD by the vizier Badr al-Gamali, parts of which survive today at Bab Zuwayla in the south and Bab al-Futuh and Bab al-Nasr in the north. Among the extant monuments from the Fatimid era are the large Mosque of al-Hakim, the Aqmar Mosque, Juyushi Mosque, Lulua Mosque, and the Mosque of Al-Salih Tala'i.
One of the most important and lasting institutions founded in the Fatimid period was the Mosque of al-Azhar, founded in 970 AD, which competes with the al-Qarawiyyin in Fes for the title of oldest university in the world. Today, al-Azhar University is the foremost centre of Islamic learning in the world and one of Egypt's largest universities, with campuses across the country. The mosque itself retains significant Fatimid elements but has been added to and expanded in subsequent centuries, notably by the Mamluk sultans Qaytbay and al-Ghuri and by Abd al-Rahman Katkhuda in the 18th century.
The most prominent architectural heritage of medieval Cairo, however, dates from the Mamluk period, from 1250 to 1517 AD. The Mamluk sultans and elites were eager patrons of religious and scholarly life, commonly building religious or funerary complexes whose functions could include a mosque, madrasa, khanqah (for Sufis), a sabil (water dispensary), and a mausoleum for themselves and their families. Among the best-known examples of Mamluk monuments in Cairo are the huge Mosque-Madrasa of Sultan Hasan, the Mosque of Amir al-Maridani, the Mosque of Sultan al-Mu'ayyad (whose twin minarets were built above the gate of Bab Zuwayla), the Sultan Al-Ghuri complex, the funerary complex of Sultan Qaytbay in the Northern Cemetery, and the trio of monuments in the Bayn al-Qasrayn area comprising the complex of Sultan al-Mansur Qalawun, the Madrasa of al-Nasir Muhammad, and the Madrasa of Sultan Barquq. Some mosques include spolia (often columns or capitals) from earlier buildings built by the Romans, Byzantines, or Copts.
The Mamluks, and the later Ottomans, also built "wikala"s or caravanserais to house merchants and goods due to the important role of trade and commerce in Cairo's economy. Still intact today is the Wikala al-Ghuri, which today hosts regular performances by the Al-Tannoura Egyptian Heritage Dance Troupe. The Khan al-Khalili is a commercial hub which also integrated caravanserais (also known as "khan"s).
Citadel of Cairo.
The Citadel is a fortified enclosure begun by Salah al-Din in 1176 AD on an outcrop of the Muqattam Hills as part of a large defensive system to protect both Cairo to the north and Fustat to the southwest. It was the centre of Egyptian government and residence of its rulers until 1874, when Khedive Isma'il moved to 'Abdin Palace. It is still occupied by the military today, but is now open as a tourist attraction comprising, notably, the National Military Museum, the 14th century Mosque of al-Nasir Muhammad, and the 19th century Mosque of Muhammad Ali which commands a dominant position on Cairo's skyline.
Khan el-Khalili.
Khan el-Khalili is an ancient bazaar, or marketplace adjacent to the Al-Hussein Mosque. It dates back to 1385, when Amir Jarkas el-Khalili built a large caravanserai, or khan. (A caravanserai is a hotel for traders, and usually the focal point for any surrounding area.) This original caravanserai building was demolished by Sultan al-Ghuri, who rebuilt it as a new commercial complex in the early 16th century, forming the basis for the network of souqs existing today. Many medieval elements remain today, including the ornate Mamluk-style gateways. Today, Khan el-Khalili is a major tourist attraction and popular stop for tour groups.
Society.
In the present day, Cairo is heavily urbanized. Because of the influx of people into the city, free-standing houses are rare, and apartment buildings accommodate the limited space and abundance of people. Single detached houses are usually owned by the wealthy. Formal education is also seen as important, with twelve years of standard formal education. Cairenes can take a standardized test similar to the SAT to be accepted to an institution of higher learning, but most children do not finish school and opt to pick up a trade to enter the workforce. Egypt still struggles with poverty, with almost half the population living on $2 or less a day.
Women's rights.
The civil rights movement for women in Cairo – and by extension, Egypt – has been a struggle for years. Women are reported to face constant discrimination, sexual harassment, and abuse throughout Cairo. A 2013 UN study found that over 99% of Egyptian women reported experiencing sexual harassment at some point in their lives. The problem has persisted in spite of new national laws since 2014 defining and criminalizing sexual harassment. The situation is so severe that in 2017, Cairo was named by one poll as the most dangerous megacity for women in the world. In 2020, the social media account "Assault Police" began to name and shame perpetrators of violence against women, in an effort to dissuade potential offenders. The account was founded by student Nadeen Ashraf, who is credited for instigating an iteration of the #MeToo movement in Egypt.
Pollution.
The air pollution in Cairo is a matter of serious concern. Greater Cairo's volatile aromatic hydrocarbon levels are higher than in many other similar cities. Air quality measurements in Cairo have also been recording dangerous levels of lead, carbon dioxide, sulphur dioxide, and suspended particulate matter concentrations due to decades of unregulated vehicle emissions, urban industrial operations, and chaff and trash burning. There are over 4,500,000 cars on the streets of Cairo, 60% of which are over 10 years old and therefore lack modern emission-cutting features. Cairo has a very poor dispersion factor because of its lack of rain and its layout of tall buildings and narrow streets, which create a bowl effect.
In recent years, a black cloud (as Egyptians refer to it) of smog has appeared over Cairo every autumn due to temperature inversion. Smog causes serious respiratory diseases and eye irritations for the city's citizens. Tourists who are not familiar with such high levels of pollution must take extra care.
Cairo also has many unregistered lead and copper smelters which heavily pollute the city. The result has been a permanent haze over the city, with particulate matter in the air reaching over three times normal levels. It is estimated that 10,000 to 25,000 people a year in Cairo die due to air pollution-related diseases. Lead has been shown to cause harm to the central nervous system and neurotoxicity, particularly in children. In 1995, the first environmental acts were introduced and the situation has seen some improvement, with 36 air monitoring stations and emissions tests on cars. Twenty thousand buses have also been commissioned to the city to improve congestion levels, which are very high.
The city also suffers from a high level of land pollution. Cairo produces 10,000 tons of waste material each day, 4,000 tons of which are not collected or managed. This is a huge health hazard, and the Egyptian Government is looking for ways to combat this. The Cairo Cleaning and Beautification Agency was founded to collect and recycle the waste; it works with the Zabbaleen community that has been collecting and recycling Cairo's waste since the turn of the 20th century and lives in an area known locally as Manshiyat Naser. Both are working together to pick up as much waste as possible within the city limits, though it remains a pressing problem.
International relations.
The Headquarters of the Arab League is located at Tahrir Square in downtown Cairo.
Twin towns – sister cities.
Cairo is twinned with:
Chaos theory
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause or prevent a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future."
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction.
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
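To make the exponential growth of forecast uncertainty concrete, the following minimal Python sketch (an illustration added here, not taken from any cited source) tracks two nearly identical initial conditions under the chaotic logistic map "x" → 4"x"(1 – "x"), whose Lyapunov exponent is ln 2, so its Lyapunov time is 1/ln 2 iterations. Since the proportional uncertainty grows like $e^{\lambda t}$, doubling the elapsed time squares it, until the error saturates at the size of the attractor and prediction fails:

```python
# A minimal sketch: exponential error growth in the chaotic logistic map
# x -> 4x(1-x). The Lyapunov exponent is ln(2), so the forecast error
# roughly doubles every iteration until it saturates.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12   # two nearly identical initial conditions
for n in range(1, 61):
    x, y = logistic(x), logistic(y)
    if n % 10 == 0:
        print(f"step {n:2d}: error ~ {abs(x - y):.3e}")
# The error grows by roughly a factor of 2**10 every 10 steps (1e-9,
# 1e-6, 1e-3, ...) until it saturates near the attractor's size,
# after which a forecast carries no information.
```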
Chaotic dynamics.
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties: (1) it must be sensitive to initial conditions, (2) it must be topologically transitive, and (3) it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions.
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled "Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?". The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled "The Essence of Chaos", published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future, only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will remain within its natural bounds during the current geologic era, but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation $\delta\mathbf{Z}_0$, the two trajectories end up diverging at a rate given by
$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$
where $t$ is the time and $\lambda$ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE, coupled with the solution's boundedness, is usually taken as an indication that the system is chaotic.
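For a one-dimensional map $x_{n+1} = f(x_n)$, the MLE can be estimated as the orbit average $\lambda \approx \frac{1}{N}\sum_n \ln|f'(x_n)|$. The sketch below (a standard numerical recipe, not tied to any particular source) applies this to the logistic map, for which the exact value at $r = 4$ is ln 2 ≈ 0.693:

```python
# A minimal sketch: estimating the maximal Lyapunov exponent of the
# logistic map x -> r x (1 - x) by averaging ln|f'(x)| along an orbit.
# A positive result indicates chaos (the orbit here stays bounded).
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):            # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))  # ln|f'(x)|
    return total / n_iter

print(lyapunov_logistic(4.0))   # ~0.693 = ln 2 (chaotic)
print(lyapunov_logistic(3.2))   # negative (stable period-2 cycle)
```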
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity.
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing.
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
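A few lines of Python make the doubling example explicit: the separation between two nearby points grows by a factor of 2 per step (sensitive dependence), yet every nonzero orbit simply escapes to infinity, so there is no mixing and no chaos. This sketch is only an illustration of the argument above:

```python
# A minimal sketch: repeated doubling has sensitive dependence on
# initial conditions but no topological mixing, hence no chaos.
x, y = 1.0, 1.0 + 1e-9
for n in range(40):
    x, y = 2 * x, 2 * y
print(abs(x - y))   # the tiny initial gap has grown by 2**40
print(x)            # but both orbits just blow up monotonically
```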
Topological transitivity.
A map $f : X \to X$ is said to be topologically transitive if for any pair of non-empty open sets $U, V \subset X$, there exists $k > 0$ such that $f^{k}(U) \cap V \neq \emptyset$. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point "x" and a region "V", there exists a point "y" near "x" whose orbit passes through "V". This implies that it is impossible to decompose the system into two disjoint open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if "X" is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in "X" that have dense orbits.
Density of periodic orbits.
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by "x" → 4 "x" (1 – "x") is one of the simplest systems with density of periodic orbits. For example, $\tfrac{5-\sqrt{5}}{8} \to \tfrac{5+\sqrt{5}}{8} \to \tfrac{5-\sqrt{5}}{8}$ (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
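The quoted period-2 orbit can be checked directly. The short sketch below (an added illustration) verifies that the two values map to each other under the logistic map and that the cycle is repelling, since the derivative of the second iterate along the cycle has magnitude 4 > 1:

```python
# A minimal sketch: verifying the period-2 orbit of x -> 4x(1-x) and
# checking that it is repelling (cycle multiplier exceeds 1 in magnitude).
import math

p = (5 - math.sqrt(5)) / 8           # ~0.3454915
q = (5 + math.sqrt(5)) / 8           # ~0.9045085
f = lambda x: 4 * x * (1 - x)
print(f(p), q)                        # f(p) == q
print(f(q), p)                        # f(q) == p
fp = lambda x: 4 * (1 - 2 * x)        # derivative of f
print(abs(fp(p) * fp(q)))             # multiplier of the 2-cycle: 4 > 1
```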
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
Strange attractors.
Some dynamical systems, like the one-dimensional logistic map defined by "x" → 4 "x" (1 – "x"), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed a plotted orbit quickly traces out the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
Coexisting attractors.
In contrast to single-type chaotic solutions, studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic solutions may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view that "weather is chaotic".
Minimum complexity of a chaotic system.
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
$$\begin{aligned} \frac{\mathrm{d}x}{\mathrm{d}t} &= \sigma (y - x), \\ \frac{\mathrm{d}y}{\mathrm{d}t} &= x (\rho - z) - y, \\ \frac{\mathrm{d}z}{\mathrm{d}t} &= x y - \beta z, \end{aligned}$$
where $x$, $y$, and $z$ make up the system state, $t$ is time, and $\sigma$, $\rho$, and $\beta$ are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
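A minimal numerical sketch of the Lorenz system, assuming SciPy is available, integrates the three equations above with the classic parameter choice $\sigma = 10$, $\rho = 28$, $\beta = 8/3$ and confirms that the orbit stays bounded on the strange attractor:

```python
# A minimal sketch (assuming SciPy): integrating the Lorenz equations
# with the classic chaotic parameters and tracing an orbit onto the
# butterfly-shaped strange attractor.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-9)
t = np.linspace(0, 50, 10_000)
x, y, z = sol.sol(t)
print(x.min(), x.max())   # the orbit wanders irregularly but stays bounded
```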
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
Chaos and linear systems.
Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in functional analysis.
Quantum mechanics is also often considered a prime example of a linear, non-chaotic theory that damps out chaotic behaviour in the same manner that viscosity damps out turbulence. This is not, however, the case for quantum mechanical systems with infinite degrees of freedom, such as strongly correlated systems, which do exhibit forms of nanoscale turbulence.
Other characteristics of chaos.
Infinite dimensional maps.
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps:
$$\psi_{n+1}(\mathbf{r}, t) = \int K(\mathbf{r} - \mathbf{r}', t)\, f[\psi_{n}(\mathbf{r}', t)]\, d\mathbf{r}',$$
where the kernel $K(\mathbf{r} - \mathbf{r}', t)$ is a propagator derived as a Green function of a relevant physical system, and $f[\psi_{n}(\mathbf{r}, t)]$ might be a logistic-map-like nonlinearity such as $\psi \to G\psi[1 - \tanh(\psi)]$ or a complex map. For examples of complex maps, the Julia set $f[\psi] = \psi^{2}$ or the Ikeda map $\psi_{n+1} = A + B\psi_{n} e^{i(|\psi_{n}|^{2} + C)}$ may serve. When wave propagation problems at distance $L$ with wavelength $\lambda$ are considered, the kernel $K$ may have the form of a Green function for the Schrödinger equation:
$$K(\mathbf{r} - \mathbf{r}', L) = \frac{ik\,e^{ikL}}{2\pi L} \exp\!\left(\frac{ik|\mathbf{r} - \mathbf{r}'|^{2}}{2L}\right).$$
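As an illustration of iterating such a spatially coupled map, the sketch below substitutes assumed stand-ins for concreteness: a normalized Gaussian kernel in place of a physical Green function and the logistic map as the local nonlinearity, with the convolution done by FFT on a periodic 1-D grid:

```python
# A minimal sketch of a convolution-mediated map: a Gaussian kernel
# (an assumed stand-in for a physical Green function) couples logistic
# maps on a periodic 1-D grid, i.e. x_{n+1} = K * f(x_n).
import numpy as np

N, width = 256, 5.0
r = np.arange(N) - N // 2
kernel = np.exp(-r**2 / (2 * width**2))
kernel /= kernel.sum()                       # normalized propagator K
K_hat = np.fft.fft(np.fft.ifftshift(kernel)) # kernel centred at index 0

x = np.random.default_rng(0).uniform(0.2, 0.8, N)   # initial field
for _ in range(100):
    local = 4.0 * x * (1.0 - x)                     # logistic nonlinearity f
    x = np.real(np.fft.ifft(K_hat * np.fft.fft(local)))  # circular convolution
print(x[:5])   # the evolving field after 100 steps
```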
Spontaneous order.
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
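A minimal simulation of the Kuramoto model illustrates this spontaneous order. In the mean-field form assumed below, each oscillator couples to the complex order parameter $r e^{i\psi} = \frac{1}{N}\sum_j e^{i\theta_j}$; for coupling strength above the critical value (about 1.6 for unit-variance Gaussian frequencies), $r$ rises well above zero, signalling synchronization:

```python
# A minimal sketch of the Kuramoto model: phase oscillators with random
# natural frequencies couple through the mean-field order parameter.
import numpy as np

rng = np.random.default_rng(1)
N, K, dt = 500, 2.0, 0.01
omega = rng.normal(0.0, 1.0, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

for step in range(5000):
    z = np.mean(np.exp(1j * theta))      # complex order parameter r*exp(i*psi)
    # mean-field form: dtheta/dt = omega + K * |z| * sin(psi - theta)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

print(abs(np.mean(np.exp(1j * theta))))  # well above 0 for K = 2 > K_c ~ 1.6
```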
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
Combinatorial (or complex) chaos.
There are also definitions of chaos that do not require the sensitivity to initial conditions, such as combinatorial chaos (i.e., recursively applying a discrete combinatorial action). This is comparable to the chaos generated by cellular automata. It is significant because this type of chaos can be equivalent to a Turing machine: computation can be executed with such dynamical systems, so the halting problem applies and some computations may never terminate. This is ultimately a very different way for a system to be unpredictable.
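A minimal sketch of this kind of chaos follows, using elementary cellular automaton Rule 110, a rule proven to be Turing-complete; the row width and step count are illustrative.
<syntaxhighlight lang="python">
def step(cells, rule=110):
    """Apply one synchronous update of an elementary cellular automaton.

    Each cell's new value is the bit of `rule` indexed by the 3-bit
    neighborhood (left, center, right), with periodic boundaries.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 79 + [1]                  # a single live cell on a periodic row
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
</syntaxhighlight>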
History.
James Clerk Maxwell was the first scientist to emphasize the importance of initial conditions, and he is seen as one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. In the 1880s, while studying the three-body problem, Henri Poincaré found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Experimentalists and mathematicians had encountered turbulence in fluid motion, chaotic behaviour in society and economy, nonperiodic oscillation in radio circuits and fractal patterns in nature without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, which is smooth and continuous and was the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments, such as that of the logistic map, which shows jumpy and erratic behavior. Both observations underline the connection of chaos to stochastic or nonlinear dynamical systems, and in particular to non-differentiable, non-continuous time evolution.
What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (the Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment, and Boris Chirikov himself is considered a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
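A minimal sketch of the rounding experiment described above: two trajectories of the Lorenz system started from 0.506127 and from its three-digit printout 0.506 visibly drift apart. Crude Euler stepping and the standard parameter values are assumed for the example.
<syntaxhighlight lang="python">
def step(x, y, z, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Euler step of the Lorenz system with the classic parameters."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

full = (0.506127, 1.0, 1.0)      # full-precision initial condition
trunc = (0.506, 1.0, 1.0)        # the 3-digit printout
for n in range(1, 30001):
    full, trunc = step(*full), step(*trunc)
    if n % 5000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(full, trunc)) ** 0.5
        print(f"t = {n * 0.001:5.1f}  separation = {sep:.6f}")
</syntaxhighlight>
The printed separation grows from about 0.0001 to the size of the attractor itself, mirroring what Lorenz saw in his weather runs.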
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or "snowflake", which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published "The Fractal Geometry of Nature", which became a classic of chaos theory.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in "Physical Review Letters" describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published "Chaos: Making a New Science", which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in "The Structure of Scientific Revolutions" (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, and pandemic crisis management.
A popular but inaccurate analogy for chaos.
The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:
<poem style="margin-left: 2em;">
For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
</poem>
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. The characteristic of the aforementioned verse was described as "finite-time sensitive dependence".
Applications.
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography.
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many of the DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.
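A minimal sketch of the general idea follows: the logistic map's control parameter and initial condition act as the secret key, and quantized iterates form a keystream XORed with the plaintext. This toy construction is for illustration only; schemes of this simple kind have been broken and it must not be used for real security.
<syntaxhighlight lang="python">
def keystream(x0, r, nbytes, warmup=1000):
    """Generate nbytes of keystream from logistic-map iterates."""
    x = x0
    for _ in range(warmup):              # discard the transient
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)  # quantize an iterate to one byte
    return bytes(out)

def crypt(data, key=(0.37, 3.99)):
    """XOR with the keystream; encryption and decryption are identical."""
    ks = keystream(key[0], key[1], len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
assert crypt(crypt(msg)) == msg          # XOR round-trips with the same key
</syntaxhighlight>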
Robotics.
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology.
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
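A minimal sketch of such a discrete population model follows, using the logistic map x → rx(1 − x), where x is the population as a fraction of carrying capacity; the growth values of r shown are standard illustrative choices.
<syntaxhighlight lang="python">
def trajectory(r, x0=0.5, warmup=500, keep=8):
    """Iterate the logistic map past its transient and return a few values."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {trajectory(r)}")
# r = 2.8 settles to a single value, 3.2 to a 2-cycle, 3.5 to a 4-cycle,
# while 3.9 wanders chaotically.
</syntaxhighlight>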
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always a potential difficulty in distinguishing real chaos from chaos that exists only in the model. Hence both constraint in the model and duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry & Wall 1984. Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in pathogen population.
Economics.
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos can be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates, as well as in embedding shocks due to external events such as COVID-19.
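A minimal sketch of the recurrence idea behind such analyses: two instants i and j count as recurrent when the states x_i and x_j lie within a threshold eps of each other. The recurrence rate computed below is the simplest quantification measure; the correlation index of Orlando et al. is a more elaborate statistic, and the chaotic logistic-map series merely stands in for economic data.
<syntaxhighlight lang="python">
import numpy as np

def recurrence_matrix(series, eps):
    """R[i, j] = 1 when |x_i - x_j| < eps, else 0."""
    x = np.asarray(series)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

x, xs = 0.4, []                      # toy series from the logistic map
for _ in range(300):
    x = 3.9 * x * (1.0 - x)
    xs.append(x)

R = recurrence_matrix(xs, eps=0.05)
print("recurrence rate:", R.mean())  # fraction of recurrent pairs
</syntaxhighlight>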
Finite predictability in weather and climate.
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
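A back-of-the-envelope sketch of the doubling-time argument follows. Assuming an illustrative initial relative error that doubles every five days, the error reaches saturation (comparable to the spread between randomly chosen atmospheric states, normalized to 1 here) after a finite time, which is the predictability horizon.
<syntaxhighlight lang="python">
import math

doubling_days = 5.0   # doubling time extrapolated by Charney et al.
e0 = 0.05             # assumed illustrative initial relative error

# error(t) = e0 * 2**(t / doubling_days); solve error(t) = 1 for t
horizon = doubling_days * math.log2(1.0 / e0)
print(f"predictability horizon = {horizon:.1f} days")   # about 21.6 here
</syntaxhighlight>
A smaller initial error only buys time logarithmically, which is why halving analysis errors extends the horizon by just one doubling time.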
AI-extended modeling framework.
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
Other areas.
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
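A minimal sketch of the chaos-injection idea for particle swarm optimization follows: logistic-map iterates replace the usual uniform random coefficients so the swarm's exploration never settles into a repeating pattern. The sphere function stands in for a real solubility model, and all parameter values and seeds are illustrative assumptions.
<syntaxhighlight lang="python">
import numpy as np

def chaotic(x):
    """One logistic-map iterate, used in place of a uniform random draw."""
    return 4.0 * x * (1.0 - x)

def sphere(p):
    """Toy objective to minimize; a stand-in for a real model."""
    return float(np.sum(p ** 2))

rng = np.random.default_rng(2)
n, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
best_p = pos.copy()
best_f = np.array([sphere(p) for p in pos])
g = best_p[best_f.argmin()].copy()        # global best position
r1, r2 = 0.31, 0.62                       # seeds of the chaotic streams

for _ in range(200):
    r1, r2 = chaotic(r1), chaotic(r2)
    vel = w * vel + c1 * r1 * (best_p - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([sphere(p) for p in pos])
    improved = f < best_f
    best_p[improved], best_f[improved] = pos[improved], f[improved]
    g = best_p[best_f.argmin()].copy()

print("best value found:", best_f.min())
</syntaxhighlight>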
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model; the Biham–Middleton–Levine (BML) traffic model, a simple cellular-automaton picture of how congestion emerges, is sketched below.
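A minimal sketch of the BML model follows, assuming a periodic grid: eastbound cars advance on even steps and southbound cars on odd steps, each moving only into an empty cell. Depending on the car density, the system self-organizes into free flow or a global jam.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N, density = 64, 0.3
# 0 = empty, 1 = eastbound car, 2 = southbound car
grid = rng.choice([0, 1, 2], size=(N, N),
                  p=[1 - density, density / 2, density / 2])

def step(grid, t):
    kind, axis = (1, 1) if t % 2 == 0 else (2, 0)  # east on even, south on odd
    ahead = np.roll(grid, -1, axis=axis)           # cell each car wants to enter
    movers = (grid == kind) & (ahead == 0)         # synchronous, collision-free
    new = grid.copy()
    new[movers] = 0                                # vacate old cells
    new[np.roll(movers, 1, axis=axis)] = kind      # occupy target cells
    return new

for t in range(200):
    grid = step(grid, t)
print("cars still present:", np.count_nonzero(grid))
</syntaxhighlight>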
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
See also.
Examples of chaotic systems
Other related topics
People
Cupola
In architecture, a cupola is a relatively small, usually dome-like structure on top of a building, often crowning a larger roof or dome. Cupolas often serve as a roof lantern to admit light and air or as a lookout.
The word derives, via Italian, from Lower Latin "cupula" (classical Latin "cupella", from Latin "cupa"), indicating a vault resembling an upside-down cup.
The cylindrical drum underneath a larger cupola is called a tholobate.
Background.
The cupola evolved during the Renaissance from the older oculus. Being weatherproof, the cupola was better suited to the wetter climates of northern Europe. The chhatri, seen in Indian architecture, fits the definition of a cupola when it is used atop a larger structure.
Cupolas often serve as a belfry, belvedere, or roof lantern above a main roof. In other cases they may crown a spire, tower, or turret. Barns often have cupolas for ventilation.
Cupolas can also appear as small buildings in their own right.
The square, dome-like segment of a North American railroad train caboose that contains the second-level or "angel" seats is also called a cupola.
On armoured vehicles.
The term cupola can also refer to the protrusions atop an armoured fighting vehicle, due to their distinctive dome-like appearance. They allow crew or personnel to observe, offering very good all-round vision, or even to field weaponry, without being exposed to incoming fire. Later designs, however, became progressively flatter and less prominent as technology evolved to allow designers to reduce the profile of their vehicles.
Chupacabra
The chupacabra or chupacabras (literally 'goat-sucker', from Spanish "chupa", 'sucks', and "cabras", 'goats') is a legendary creature, or cryptid, in the folklore of parts of the Americas. The name comes from the animal's purported vampirism: the chupacabra is said to attack and drink the blood of livestock, including goats.
Physical descriptions of the creature vary. In Puerto Rico and in Hispanic America it is generally described as a heavy creature, reptilian and alien-like, roughly the size of a small bear, and with a row of spines reaching from the neck to the base of the tail, while in the Southwestern United States it is depicted as more dog-like.
Initial sightings and accompanying descriptions first occurred in Puerto Rico in 1995. The creature has since been reported as far north as Maine, as far south as Chile, and even outside the Americas in countries like Russia and the Philippines. All of the reports are anecdotal and have been disregarded as uncorroborated or lacking evidence. Sightings in northern Mexico and the Southern United States have been verified as canids afflicted by mange.
Name.
"Chupacabras" can be literally translated as 'goat-sucker', from "chupar" ('to suck') and "cabras" ('goats'). It is known as both "chupacabras" and "chupacabra" throughout the Americas, with the former being the original name, and the latter a regularization. The name is attributed to Puerto Rican comedian Silverio Pérez, who coined the label in 1995 while commenting on the attacks as a San Juan radio deejay.
History.
In 1975, a series of livestock killings in the small town of Moca, Puerto Rico were attributed to "El Vampiro de Moca" ('the Vampire of Moca'). Initially, it was suspected that the killings were committed by a Satanic cult; later more killings were reported around the island, and many farms reported loss of animal life. Each of the animals was reported to have had its body bled dry through a series of small circular incisions.
The first reported attack eventually attributed to the actual chupacabras occurred in March 1995. Eight sheep were discovered dead in Puerto Rico, each with three puncture wounds in the chest area and reportedly completely drained of blood. A few months later, in August, an eyewitness named Madelyne Tolentino reported seeing the creature in the Puerto Rican town of Canóvanas, where as many as 150 farm animals and pets were reportedly killed.
Puerto Rican comedian and entrepreneur Silverio Pérez is credited with coining the term soon after the first incidents were reported in the press. Shortly after the first reported incidents in Puerto Rico, other animal deaths were reported in other countries, such as Argentina, Bolivia, Brazil, Chile, Colombia, Dominican Republic, El Salvador, Honduras, Mexico, Nicaragua, Panama, Peru, and the United States.
In 2019, a video recording showed the results of a supposed attack on chickens in the Seburuquillo sector of Lares, Puerto Rico.
Reputed origin.
A five-year investigation by Benjamin Radford, documented in his 2011 book "Tracking the Chupacabra", concluded that the description given by the original eyewitness in Puerto Rico, Madelyne Tolentino, was based on the creature Sil in the 1995 science-fiction horror film "Species". The alien creature Sil is nearly identical to Tolentino's chupacabra eyewitness account and she had seen the movie before her report: "It was a creature that looked like the chupacabra, with spines on its back and all... The resemblance to the chupacabra was really impressive", Tolentino reported. Radford revealed that Tolentino "believed that the creatures and events she saw in "Species" were happening in reality in Puerto Rico at the time", and therefore concludes that "the most important chupacabra description cannot be trusted". This, Radford believes, seriously undermines the credibility of the chupacabra as a real animal.
The reports of blood-sucking by the chupacabra were never confirmed by a necropsy, the only way to conclude that the animal was drained of blood. Dr. David Morales, a Puerto Rican veterinarian with the Department of Agriculture, analyzed 300 reported victims of the chupacabra and found that they had not been bled dry.
Radford divided the chupacabra reports into two categories: the reports from Puerto Rico and Latin America, where animals were attacked and it is supposed their blood was extracted; and the reports in the United States of mammals, mostly dogs and coyotes with mange, that people call "chupacabra" due to their unusual appearance.
In 2010, University of Michigan biologist Barry O'Connor concluded that all the chupacabra reports in the United States were simply coyotes infected with the parasite "Sarcoptes scabiei", whose symptoms would explain most of the features of the chupacabra: they would be left with little fur, thickened skin, and a rank odor. O'Connor theorized that the attacks on goats occurred "because these animals are greatly weakened, [so] they're going to have a hard time hunting. So they may be forced into attacking livestock because it's easier than running down a rabbit or a deer." Both dogs and coyotes can kill and not consume the prey, either because they are inexperienced, or due to injury or difficulty in killing the prey. The prey can survive the attack and die afterwards from internal bleeding or circulatory shock. The presence of two holes in the neck, corresponding with the canine teeth, are to be expected since this is the only way that most land carnivores have to catch their prey. There are reports of stray Mexican hairless dogs being mistaken for chupacabras.
Appearance.
The most common description of the chupacabra is that of a reptile-like creature, said to have leathery or scaly greenish-gray skin and sharp spines or quills running down its back. It is said to be approximately high, and stands and hops in a fashion similar to that of a kangaroo. This description was the chief one given to the few Puerto Rican reports in 1995 that claimed to have sighted the creature, with similar reports in parts of Chile and Argentina following.
Another common description of the chupacabra is of a strange breed of wild dog. This form is mostly hairless and has a pronounced spinal ridge, unusually pronounced eye sockets, fangs, and claws. This description started to appear in the early 2000s from reports trailing north from the Yucatán Peninsula, northern Mexico, and then into the United States; becoming the predominant description since. Unlike conventional predators, the chupacabra is said to drain all of the animal's blood (and sometimes organs) usually through three holes in the shape of a downwards-pointing triangle, but sometimes through only one or two holes.
Plausibility of existence.
The chupacabra panic first started in late 1995, Puerto Rico: farmers were mass reporting the mysterious killings of various livestock. In these reports, the farmers recalled two puncture wounds on the animal carcasses. Chupacabra killings were soon associated with a seemingly untouched animal carcass other than puncture wounds which were said to be used to suck the blood out of the victim. Reports of such killings began to spread around and eventually out of the country, reaching areas such as Mexico, Brazil, Chile, and the Southern area of the United States.
Most notably, these areas experience frequent, and extreme dry seasons; in the cases of the Puerto Rican reports of 1995 and the Mexican reports of 1996, both countries were currently experiencing or dealing with the aftermath of severe droughts. Investigations carried out in both countries at this time noted a certain dramatic violence in these killings. These environmental conditions could provide a simple explanation for the livestock killings: wild predators losing their usual prey to the drought, therefore being forced to hunt the livestock of farmers for sustenance. Thus, the same theory can be applied to many of the other 'chupacabra' attacks: that the dry weather had created a more competitive environment for native predators, leading them to prey on livestock to survive. Such an idea can also explain the increased violence in the killings; hungry and desperate predators are driven to hunt livestock to avoid starvation, causing an increase in both the number of livestock killings, and the viciousness of each one.
Evidence of this is provided on page 179 of Benjamin Radford's book, "Tracking the Chupacabra: The Vampire Beast in Fact, Fiction, and Folklore". Radford's chart highlights ten significant reports of chupacabra attacks, seven of which had a carcass recovered and examined; these autopsies concluded the causes of death were various animal attacks, as shown through the animal DNA found on the carcasses. Radford provides further evidence on pages 161–162 of his book, displaying animals proven to have fallen victim to ordinary coyote attacks; thus it is not unusual for an animal carcass to be left uneaten while displaying only puncture wounds and/or minimal signs of attack.
The plausibility of the chupacabra's existence is also undermined by the varying descriptions of the creature. Depending on the reported sighting, the creature is described with thick skin or fur, wings or no wings, a long tail or no tail, and as bat-like, dog-like, or even alien-like. Evidently, the chupacabra has a wide variety of descriptions, to the point where it is hard to believe that all the sightings are of the same creature. A likely explanation for this phenomenon is that individuals who had heard of the newly popular chupacabra had the creature's name fresh in their mind when they happened to see a strange-looking animal. They then made sense of their encounter by labelling it as the recently 'discovered' monster, instead of seeking a more realistic explanation. For example, some scientists hypothesize that what many believe to be a chupacabra is a wild or domestic dog affected by mange, a disease causing a thick buildup of skin and hair loss.
Related legends.
The "Ozark Howler", a large bear-like animal, is the subject of a similar legend.
The Peuchens of Chile also share similarities in their supposed habits, but instead of being dog-like they are described as winged snakes. This legend may have originated from the vampire bat, an animal endemic to the region.
In the Philippines the Sigbin shares many of the chupacabra's descriptions.
In 2018 there were reports of suspected chupacabras in Manipur, India. Many domestic animals and poultry were killed in a manner similar to other chupacabra attacks, and several people reported that they had seen creatures. Forensic experts opined that street dogs were responsible for mass killing of domestic animals and poultry after studying the remnants of a corpse.
Cayuga Lake
Cayuga Lake is the longest of central New York's glacial Finger Lakes, and is the second largest in surface area (marginally smaller than Seneca Lake) and second largest in volume. It is just under long. Its average width is , and it is at its widest point, near Aurora. It is approximately at its deepest point, and has over of shoreline.
The lake is named after the indigenous Cayuga people.
Location.
The city of Ithaca, site of Ithaca College and Cornell University, is located at the southern end of Cayuga Lake.
On the northern shore rests Seneca Falls, the historic birthplace of women's rights and site of the Seneca Falls Convention, and what is widely accepted as the real Bedford Falls from the Frank Capra movie "It's a Wonderful Life". The Town of Seneca Falls comprises 25.3 square miles at the northern tip of Cayuga Lake. It is one of ten townships in Seneca County and its largest community, with approximately 8,650 residents.
Villages and settlements along the east shore of Cayuga Lake include Myers, King Ferry, Aurora, Levanna, Union Springs, and Cayuga. Settlements along the west shore of the lake include Sheldrake, Poplar Beach, and Canoga.
The lake has two small islands. One is near Union Springs, called Frontenac Island (northeast); this island is not inhabited. The other island, Canoga Island (northwest), is located near the town of Canoga. This island has several camps and is inhabited during the summer months. The only other island in any of the Finger Lakes is Skenoh Island in Canandaigua Lake.
Geographical characteristics.
The lake depth, with steep east and west sides and shallow north and south ends, is typical of the Finger Lakes, as they were carved by glaciers during the last ice age.
The water level is regulated by the Mud Lock at the north end of the lake. It is connected to Lake Ontario by the Erie Canal and Seneca Lake by the Seneca River. The lake is drawn down as winter approaches, to minimize ice damage and to maximize its capacity to store heavy spring runoff.
The north end is dominated by shallow mudflats. An important stopover for migratory birds, the mudflats and marsh are the location of the Montezuma National Wildlife Refuge. The southern end is also shallow and often freezes during the winter.
Human impact.
Cayuga Lake is very popular among recreational boaters. The Allan H. Treman State Marine Park, with a large state marina and boat launch, is located at the southern end of the lake in Ithaca. There are two yacht clubs on the western shore: Ithaca Yacht Club, a few miles north of Ithaca, and Red Jacket Yacht Club, just south of Canoga. There are several other marinas and boat launches, scattered along the lake shore.
Cayuga Lake is the source of drinking water for several communities, including Lansing, near the southern end of the lake along the east side, which draws water through the Bolton Point Water System. There are also several lake source cooling systems that are in operation on the lake, whereby cooler water is pumped from the depths of the lake, warmed, and circulated in a closed system back to the surface. One of these systems, which is operated by Cornell University and began operation in 2000, was controversial during the planning and building stages, due to its potential for having a negative environmental impact. However, all of the environmental impact reports and scientific studies have shown that the Cornell lake source cooling system has not yet had, and will not likely have any measurably significant environmental impact. Furthermore, Cornell's system pumps significantly less warm water back into the lake than others further north, which have been operating for decades, including the coal-fired power plant on the eastern shore.
The AES Coal Power plant was shut down in August 2019, and there are plans to convert it into a data center in the near future. The plant used to use Cayuga Lake as a cooling source. In the late 1960s, citizens successfully opposed the construction of an 830-MW nuclear power plant on the shore of Cayuga Lake.
Rod Serling named his production company Cayuga Productions, during the years of his TV series, "The Twilight Zone". Serling and his family had a summer home at Cayuga Lake.
Fishing.
The fish population is managed, and substantial sport fishing is practiced, with anglers targeting smelt, lake trout and smallmouth bass. Fish species present in the lake include lake trout, landlocked salmon, brown trout, rainbow trout, smallmouth bass, smelt, alewife, Atlantic salmon, black crappie, bluegill, pickerel, largemouth bass, northern pike, pumpkinseed sunfish, rock bass, and yellow perch. The round goby has been an invasive species in the lake since the 1990s. There are state-owned hard surface ramps in Cayuga–Seneca Canal, Lock #1 (Mud Lock), Long Point State Park, Cayuga Lake State Park, Deans Cove Boat Launch, Taughannock Falls State Park, and Allan H. Treman State Marine Park.
Tributaries.
The major inflows to the lake are Fall Creek, Cayuga Inlet, Salmon Creek, Taughannock Creek, and Six Mile Creek, while the lake drains into the Seneca River. A number of ungaged tributaries also flow into the lake.
Folklore.
The lake is the subject of local folklore.
An "Ithaca Journal" article of 5 January 1897, reported that a sea serpent, nicknamed "Old Greeny," had been sighted in Cayuga Lake annually for 69 years. A sighting in that month described the animal, from shore, as "large and its body long", although a "tramp" suggested it was a muskrat. In 1929, two creatures, about in length, were reportedly spotted along the eastern shore of the lake. Further sightings were reported in 1974 and 1979.
Cornell's alma mater makes reference to its position "Far Above Cayuga's Waters", while that of Ithaca College references "Cayuga's shore".
A tradition at Wells College in Aurora, NY, held that if the lake completely freezes over, classes are canceled, though for only one day. According to Wells College records, this happened eight times, in "1875, 1912, 1918, 1934, 1948, 1962, 1979 and 2015."
Cayuga Lake, like nearby Seneca Lake, is also the site of a phenomenon known as the Guns of the Seneca, mysterious cannon-like booms heard in the surrounding area. Many of these booms may be attributable to bird-scarers, automated cannon-like devices used by farmers to scare birds away from the many vineyards, orchards and crops. There is, however, no proof of this.
Wine.
Cayuga Lake is included in the American Viticultural Area with which it shares its name. Established in 1988, the AVA now boasts over a dozen wineries, four distilleries, a cidery, and a meadery.
Columbia University
Columbia University in the City of New York, commonly referred to as Columbia University, is a private Ivy League research university in New York City. Established in 1754 as King's College on the grounds of Trinity Church in Manhattan, it is the oldest institution of higher education in New York and the fifth-oldest in the United States.
Columbia was established as a colonial college by royal charter under George II of Great Britain. It was renamed Columbia College in 1784 following the American Revolution, and in 1787 was placed under a private board of trustees headed by former students Alexander Hamilton and John Jay. In 1896, the campus was moved to its current location in Morningside Heights and renamed Columbia University.
Columbia is organized into twenty schools, including four undergraduate schools and 16 graduate schools. The university's research efforts include the Lamont–Doherty Earth Observatory, the Goddard Institute for Space Studies, and accelerator laboratories with Big Tech firms such as Amazon and IBM. Columbia is a founding member of the Association of American Universities and was the first school in the United States to grant the MD degree. The university also administers and annually awards the Pulitzer Prize.
Columbia scientists and scholars have played a pivotal role in scientific breakthroughs including brain–computer interface; the laser and maser; nuclear magnetic resonance; the first nuclear pile; the first nuclear fission reaction in the Americas; the first evidence for plate tectonics and continental drift; and much of the initial research and planning for the Manhattan Project during World War II.
Its alumni, faculty, and staff have included 7 of the Founding Fathers of the United States of America; 4 U.S. presidents; 34 foreign heads of state or government; 2 secretaries-general of the United Nations; 10 justices of the United States Supreme Court; 103 Nobel laureates; 125 National Academy of Sciences members; 53 living billionaires; 23 Olympic medalists; 33 Academy Award winners; and 125 Pulitzer Prize recipients.
History.
18th century.
Discussions regarding the founding of a college in the Province of New York began as early as 1704.
Classes were initially held in July 1754 and were presided over by the college's first president, Samuel Johnson, an Anglican priest. The college was officially founded on October 31, 1754, as King's College by royal charter of George II, making it the oldest institution of higher learning in the State of New York and the fifth oldest in the United States.
In 1763, Johnson was succeeded in the presidency by Myles Cooper, a graduate of The Queen's College, Oxford, and an ardent Tory. In the charged political climate of the American Revolution, his chief opponent in discussions at the college was an undergraduate of the class of 1777, Alexander Hamilton. The Irish anatomist, Samuel Clossy, was appointed professor of natural philosophy in October 1765 and later the college's first professor of anatomy in 1767.
The American Revolutionary War broke out in 1776, and was catastrophic for the operation of King's College, which suspended instruction for eight years beginning in 1776 with the arrival of the Continental Army. The suspension continued through the military occupation of New York City by British troops until their departure in 1783. The college's library was looted and its sole building requisitioned for use as a military hospital first by American and then British forces.
The legislature agreed to assist the college, and on May 1, 1784, it passed "an Act for granting certain privileges to the College heretofore called King's College". The Act created a board of regents to oversee the resuscitation of King's College, and, in an effort to demonstrate its support for the new Republic, the legislature stipulated that "the College within the City of New York heretofore called King's College be forever hereafter called and known by the name of Columbia College", a reference to Columbia, an alternative name for America which in turn comes from the name of Christopher Columbus. The Regents finally became aware of the college's defective constitution in February 1787 and appointed a revision committee, which was headed by John Jay and Alexander Hamilton. In April of that same year, a new charter was adopted for the college that granted power to a separate board of 24 trustees.
For a period in the 1790s, with New York City as the federal and state capital and the country under successive Federalist governments, a revived Columbia thrived under the auspices of Federalists such as Hamilton and Jay. President George Washington and Vice President John Adams, in addition to both houses of Congress attended the college's commencement on May 6, 1789, as a tribute of honor to the many alumni of the school who had been involved in the American Revolution.
19th century.
In November 1813, the college agreed to incorporate its medical school with The College of Physicians and Surgeons, a new school created by the Regents of New York, forming Columbia University College of Physicians and Surgeons. In 1857, the college moved from the King's College campus at Park Place to a primarily Gothic Revival campus on 49th Street and Madison Avenue, where it remained for the next forty years.
During the last half of the 19th century, under the presidency of Frederick A. P. Barnard, for whom Barnard College is named, the institution rapidly assumed the shape of a modern university. Barnard College was created in 1889 as a response to the university's refusal to accept women.
In 1896, university president Seth Low moved the campus from 49th Street to its present location, a more spacious campus in the developing neighborhood of Morningside Heights. Under the leadership of Low's successor, Nicholas Murray Butler, who served for over four decades, Columbia rapidly became the nation's major institution for research, setting the multiversity model that later universities would adopt. Prior to becoming the president of Columbia University, Butler founded Teachers College, as a school to prepare home economists and manual art teachers for the children of the poor, with philanthropist Grace Hoadley Dodge. Teachers College is currently affiliated as the university's Graduate School of Education.
20th century.
In the 1940s, faculty members, including John R. Dunning, I. I. Rabi, Enrico Fermi, and Polykarp Kusch, began what became the Manhattan Project, creating the first nuclear fission reactor in the Americas and researching gaseous diffusion.
In 1928, Seth Low Junior College was established by Columbia University in order to mitigate the number of Jewish applicants to Columbia College. The college was closed in 1936 due to the adverse effects of the Great Depression and its students were subsequently taught at Morningside Heights, although they did not belong to any college but to the university at large. There was an evening school called University Extension, which taught night classes, for a fee, to anyone willing to attend.
In 1947, the program was reorganized as an undergraduate college and designated the School of General Studies in response to the return of GIs after World War II. In 1995, the School of General Studies was again reorganized as a full-fledged liberal arts college for non-traditional students (those who have had an academic break of one year or more, or are pursuing dual-degrees) and was fully integrated into Columbia's traditional undergraduate curriculum. The same year, the Division of Special Programs, later called the School of Continuing Education and now the School of Professional Studies, was established to reprise the former role of University Extension. While the School of Professional Studies only offered non-degree programs for lifelong learners and high school students in its earliest stages, it now offers degree programs in a diverse range of professional and inter-disciplinary fields.
In the aftermath of World War II, the discipline of international relations became a major scholarly focus of the university, and in response, the School of International and Public Affairs was founded in 1946, drawing upon the resources of the faculties of political science, economics, and history. The Columbia University Bicentennial was celebrated in 1954.
During the 1960s, student activism reached a climax with protests in the spring of 1968, when hundreds of students occupied buildings on campus. The incident forced the resignation of Columbia's president, Grayson Kirk, and the establishment of the University Senate.
Though several schools in the university had admitted women for years, Columbia College first admitted women in the fall of 1983, after a decade of failed negotiations with Barnard College, the all-female institution affiliated with the university, to merge the two schools. Barnard College still remains affiliated with Columbia, and all Barnard graduates are issued diplomas signed by the presidents of Columbia University and Barnard College.
During the late 20th century, the university underwent significant academic, structural, and administrative changes as it developed into a major research university. For much of the 19th century, the university consisted of decentralized and separate faculties specializing in Political Science, Philosophy, and Pure Science. In 1979, these faculties were merged into the Graduate School of Arts and Sciences. In 1991, the faculties of Columbia College, the School of General Studies, the Graduate School of Arts and Sciences, the School of the Arts, and the School of Professional Studies were merged into the Faculty of Arts and Sciences, leading to the academic integration and centralized governance of these schools.
21st century.
Bollinger presidency.
Lee C. Bollinger became Columbia's 19th president in June 2002, succeeding George Rupp. Appointed in October 2001 after arriving from the University of Michigan, his presidency emphasized campus expansion, globalization, and science, while navigating national debates.
Key initiatives included the ambitious Manhattanville campus expansion into West Harlem, addressing critical space needs and aiming to build new academic facilities, especially for sciences. Bollinger prioritized globalization, launching the World Leaders Forum and aiming to increase international student numbers. He appointed key leaders like Jeffrey Sachs (Earth Institute), Alan Brinkley (Provost), Nicholas Lemann (Journalism), David Hirsch (Research), and Nicholas Dirks (Arts & Sciences), and planned a Neuroscience Institute.
Bollinger was the defendant in the Supreme Court's 2003 affirmative action cases ("Gratz" and "Grutter"), resulting in a split decision. He consistently defended free speech principles during campus controversies involving faculty and students.
The Manhattanville expansion plan progressed, entering environmental review and the city's land-use review process. Concerns about eminent domain grew, with Bollinger calling its potential use necessary to secure land for projects like the Greene Science Center, funded by a landmark $200 million gift.
The university publicly launched a record $4 billion capital campaign in September 2006. Financial aid was improved, eliminating loans for undergraduates from families earning under $50,000, supported by a major gift from trustee Gerry Lenfest.
Globalization efforts continued with the World Leaders Forum and the creation of the Committee on Global Thought, chaired by Joseph Stiglitz. Columbia faculty received multiple Nobel Prizes: Richard Axel and Linda Buck (Medicine, 2004), Edmund Phelps (Economics, 2006), and Orhan Pamuk (Literature, 2006). Václav Havel joined the faculty.
Controversy erupted over a planned 2006 invitation to Iranian President Ahmadinejad, which was ultimately canceled due to logistical and security issues. Later that year, a campus event featuring Minuteman Project speakers was disrupted by protesters. Bollinger strongly condemned the disruption, reaffirming free speech principles while stating protesters do not have the right to silence speakers. Several students faced disciplinary action, and non-affiliated individuals involved were banned from campus.
The 2008 financial crisis impacted Columbia's endowment less than those of its peers, as only 13% of the operating budget relied on the endowment (compared to higher percentages at peers like Harvard). The endowment recovered, reaching $8.2 billion in October 2013. Despite the downturn, Columbia pressed on with Manhattanville construction, receiving final state approval in June 2009. Major gifts fueled progress, including $400 million from John Kluge upon his death, $50 million from the Vagelos family for the Medical Center, $100 million from Henry Kravis for the Business School, $30 million from Gerry Lenfest for an arts center, and $200 million from Mortimer Zuckerman for the Mind, Brain, Behavior Institute.
Following the repeal of "Don't Ask, Don't Tell," the University Senate voted 51–17 to invite ROTC back after a 40-year absence, and Bollinger announced an agreement with the Navy. Columbia expanded its Global Centers network (Amman, Beijing, Mumbai, Paris, Nairobi, Istanbul, Santiago), aiming to increase global engagement and international student enrollment (11% in CC in 2011, targeted higher).
From 2014 to 2021, Columbia University pursued significant physical expansion, notably opening major facilities on the Manhattanville campus (ZMBBI, the Lenfest Center, and The Forum). Key strategic initiatives launched included the Knight First Amendment Institute, Columbia World Projects, and the new Columbia Climate School (2020). A $5 billion university capital campaign was launched (with a $1.5 billion Arts and Sciences target), major gifts such as $50 million for the renovation of Uris Hall were secured, and the endowment grew significantly, reaching $14.35 billion by mid-2021.
The COVID-19 pandemic starting in March 2020 prompted remote operations, hiring and salary freezes, budget cuts, substantial borrowing (a figure of roughly $700 million was cited), and unpopular cuts to retirement contributions, intensifying financial pressures. After Columbia math professor Michael Thaddeus claimed that the university's ranking data was "inaccurate, dubious or highly misleading," Columbia was removed from the 2022 U.S. News rankings because the organization could not verify the submitted data. Citing concerns about the rankings' undue influence and oversimplification, Columbia's undergraduate schools withdrew from the U.S. News rankings in June 2023.
2023–present.
Beginning in fall 2023, escalating Columbia protests over the Gaza war, marked by debates on antisemitism, culminated in a major encampment, the police clearing of Hamilton Hall in April 2024, and President Minouche Shafik's subsequent resignation. Shafik was replaced by Katrina Armstrong as Acting President.
Following critical reports on antisemitism, campus conflict continued into 2025 as the second Trump administration threatened to revoke federal funding and demanded policy changes, prompting student expulsions, arrests of Palestinian students and alumni, and new university disciplinary measures. On March 21, 2025, university leaders agreed to the government's demands to "overhaul disciplinary processes, ban masks at protests, add 36 officers with the authority to make arrests and appoint a new senior vice provost to oversee academic programs focused on the Middle East" among other demands. The university's concessions have not resulted in the restoration of the withheld $400 million. On March 28, 2025, Claire Shipman was named the new Acting President.
Campus.
Morningside Heights.
The majority of Columbia's graduate and undergraduate studies are conducted on the Morningside Heights campus in Upper Manhattan, built according to Seth Low's late-19th-century vision of a university campus where all disciplines could be taught at one location. The campus was designed along Beaux-Arts planning principles by the architects McKim, Mead & White. Columbia's main campus occupies more than six city blocks in Morningside Heights, New York City, a neighborhood that contains a number of academic institutions. The university owns over 7,800 apartments in Morningside Heights, housing faculty, graduate students, and staff. Almost two dozen undergraduate dormitories (purpose-built or converted) are located on campus or in Morningside Heights. Columbia University has an extensive tunnel system, more than a century old, with the oldest portions predating the present campus. Some of these remain accessible to the public, while others have been cordoned off.
Butler Library is the largest in the Columbia University Libraries system and one of the largest buildings on the campus. It was completed in 1934 and renamed Butler Library in 1946. Columbia's library system includes over 15.0 million volumes, making it the eighth-largest library system in the United States and the fifth-largest collegiate library system.
Several buildings on the Morningside Heights campus are listed on the National Register of Historic Places. Low Memorial Library, a National Historic Landmark and the centerpiece of the campus, is listed for its architectural significance. Philosophy Hall is listed as the site of the invention of FM radio. Also listed is Pupin Hall, another National Historic Landmark, which houses the physics and astronomy departments. Here the first experiments on the fission of uranium were conducted by Enrico Fermi. The uranium atom was split there ten days after the world's first atom-splitting in Copenhagen, Denmark. Other buildings listed include Casa Italiana, the Delta Psi, Alpha Chapter building of St. Anthony Hall, Earl Hall, and the buildings of the affiliated Union Theological Seminary.
A statue by sculptor Daniel Chester French called "Alma Mater" is centered on the front steps of Low Memorial Library. The statue represents a personification of the traditional image of the university as an "alma mater", or "nourishing mother", draped in an academic gown and seated on a throne. She wears a laurel wreath on her head and holds in her right hand a scepter capped by a King's Crown, a traditional symbol of the university. A book, representing learning, rests on her lap. The arms of her throne end in lamps, representing "Sapientia et Doctrina", or "Wisdom and Learning"; on the back of the throne is embossed an image of the seal of the university. The small hidden owl on the sculpture is also the subject of many Columbia legends, the main legend being that the first student in the freshmen class to find the hidden owl on the statue will be valedictorian, and that any subsequent Columbia male who finds it will marry a Barnard student, given that Barnard is a women's college.
"The Steps", alternatively known as "Low Steps" or the "Urban Beach", are a popular meeting area for Columbia students. The term refers to the long series of granite steps leading from the lower part of campus (South Field) to its upper terrace.
Other campuses.
In April 2007, the university purchased more than two-thirds of a site for a new campus in Manhattanville, an industrial neighborhood to the north of the Morningside Heights campus. Stretching from 125th Street to 133rd Street, Columbia Manhattanville houses buildings for Columbia's Business School, School of International and Public Affairs, Columbia School of the Arts, and the Jerome L. Greene Center for Mind, Brain, and Behavior, where research will occur on neurodegenerative diseases such as Parkinson's and Alzheimer's. The $7 billion expansion plan included demolishing all buildings except three that are historically significant (the Studebaker Building, Prentis Hall, and the Nash Building), eliminating the existing light industry and storage warehouses, and relocating tenants in 132 apartments. Replacing these buildings created new space for the university. Community activist groups in West Harlem fought the expansion for reasons ranging from property protection and fair exchange for land to residents' rights, and subsequent public hearings drew neighborhood opposition. The State of New York's Empire State Development Corporation approved the use of eminent domain, which, through declaration of Manhattanville's "blighted" status, gives governmental bodies the right to appropriate private property for public use. On May 20, 2009, the New York State Public Authorities Control Board approved the Manhattanville expansion plan.
NewYork-Presbyterian Hospital is affiliated with the medical schools of both Columbia University and Cornell University. According to "U.S. News & World Report"'s "2020–21 Best Hospitals Honor Roll and Medical Specialties Rankings", it is ranked fourth overall and second among university hospitals. Columbia's medical school has a strategic partnership with the New York State Psychiatric Institute, and is affiliated with 19 other hospitals in the U.S. and four hospitals in other countries. Health-related schools are located at the Columbia University Medical Center, a campus located in the neighborhood of Washington Heights, fifty blocks uptown. Other teaching hospitals affiliated with Columbia through the NewYork-Presbyterian network include the Payne Whitney Clinic in Manhattan and the Payne Whitney Westchester, a psychiatric institute located in White Plains, New York. On the northern tip of Manhattan island (in the neighborhood of Inwood), Columbia owns Baker Field, which includes the Lawrence A. Wien Stadium as well as facilities for field sports, outdoor track, and tennis. There is a third campus on the west bank of the Hudson River, the Lamont–Doherty Earth Observatory and Earth Institute in Palisades, New York. A fourth is the Nevis Laboratories in Irvington, New York, for the study of particle and motion physics. A satellite site in Paris holds classes at Reid Hall.
Sustainability.
In 2006, the university established the Office of Environmental Stewardship to initiate, coordinate and implement programs to reduce the university's environmental footprint. The U.S. Green Building Council selected the university's Manhattanville plan for the Leadership in Energy and Environmental Design (LEED) Neighborhood Design pilot program.
Columbia has been rated "B+" by the 2011 College Sustainability Report Card for its environmental and sustainability initiatives.
According to the A. W. Kuchler U.S. potential natural vegetation types, Columbia University would have a dominant vegetation type of Appalachian Oak ("104") with a dominant vegetation form of Eastern Hardwood Forest ("25").
Transportation.
Columbia Transportation is the bus service of the university, operated by Academy Bus Lines. The buses are open to all Columbia faculty, students, Dodge Fitness Center members, and anyone else who holds a Columbia ID card. In addition, all TSC students can ride the buses.
In the New York City Subway, the 1 train serves the university at 116th Street–Columbia University. Bus routes stop on both Broadway and Amsterdam Avenue.
The main campus is primarily bounded by Amsterdam Avenue, Broadway, 114th Street, and 120th Street, with some buildings, including Barnard College, located just outside this area. The nearest major highway is the Henry Hudson Parkway (NY 9A), to the west of the campus. The campus lies south of the George Washington Bridge.
Academics.
Undergraduate admissions and financial aid.
Columbia University received 60,551 applications for the class of 2025 (entering 2021), and a total of 2,218 applicants were admitted to the two undergraduate schools for an overall acceptance rate of 3.66%. Columbia is a racially diverse school, with approximately 52% of all students identifying as persons of color. Additionally, 50% of all undergraduates received grants from Columbia, with an average grant size of $46,516. In 2015–2016, annual undergraduate tuition at Columbia was $50,526, with a total cost of attendance of $65,860 (including room and board). The college is need-blind for domestic applicants.
On April 11, 2007, Columbia University announced a $400 million donation from media billionaire alumnus John Kluge to be used exclusively for undergraduate financial aid, among the largest single gifts to higher education. Need-blind admission, however, does not apply to international students, transfer students, visiting students, or students in the School of General Studies. In the fall of 2010, Columbia's undergraduate colleges, Columbia College and the Fu Foundation School of Engineering and Applied Science (also known as SEAS or Columbia Engineering), began accepting the Common Application, making Columbia one of the last major academic institutions and the last Ivy League university to switch to the Common Application.
Scholarships are also given to undergraduate students by the admissions committee. Designations include John W. Kluge Scholars, John Jay Scholars, C. Prescott Davis Scholars, Global Scholars, Egleston Scholars, and Science Research Fellows. Named scholars are selected by the admission committee from first-year applicants. According to Columbia, the first four designated scholars "distinguish themselves for their remarkable academic and personal achievements, dynamism, intellectual curiosity, the originality and independence of their thinking, and the diversity that stems from their different cultures and their varied educational experiences".
In 1919, Columbia established a student application process characterized by "The New York Times" as "the first modern college application". The application required a photograph of the applicant, the maiden name of the applicant's mother, and the applicant's religious background.
Organization.
Columbia University is an independent, privately supported, nonsectarian and not-for-profit institution of higher education. Its official corporate name is Trustees of Columbia University in the City of New York.
In 1754, the university's first charter was granted by King George II; however, its modern charter was first enacted in 1787 and last amended in 1810 by the New York State Legislature.
Columbia has four official undergraduate colleges: Columbia College, the liberal arts college offering the Bachelor of Arts degree; the Fu Foundation School of Engineering and Applied Science (also known as SEAS or Columbia Engineering), the engineering and applied science school offering the Bachelor of Science degree; the School of General Studies, the liberal arts college offering the Bachelor of Arts degree to non-traditional students undertaking full- or part-time study; and Barnard College. Barnard College is a women's liberal arts college and an academic affiliate in which students receive a Bachelor of Arts degree from Columbia University. Their degrees are signed by the presidents of Columbia University and Barnard College. Barnard students are also eligible to cross-register for classes available through the Barnard Catalogue, and alumnae can join the Columbia Alumni Association.
Joint degree programs are available through Union Theological Seminary, the Jewish Theological Seminary of America, and the Juilliard School. Teachers College and Barnard College are official faculties of the university; both colleges' presidents are deans under the university governance structure. The Columbia University Senate includes faculty and student representatives from Teachers College and Barnard College who serve two-year terms; all senators are accorded full voting privileges regarding matters impacting the entire university. Teachers College is an affiliated, financially independent graduate school with its own board of trustees. Pursuant to an affiliation agreement, Columbia is given the authority to confer "degrees and diplomas" to the graduates of Teachers College. The degrees are signed by the presidents of Teachers College and Columbia University in a manner analogous to the university's other graduate schools. Columbia's General Studies school also has joint undergraduate programs available through University College London, Sciences Po, City University of Hong Kong, Trinity College Dublin, and the Juilliard School.
The university also has several Columbia Global Centers, in Amman, Beijing, Istanbul, Mumbai, Nairobi, Paris, Rio de Janeiro, Santiago, and Tunis.
International partnerships.
Columbia students can study abroad for a semester or a year at partner institutions such as Sciences Po, the École des hautes études en sciences sociales (EHESS), the École normale supérieure (ENS), Panthéon-Sorbonne University, King's College London, the London School of Economics, University College London and the University of Warwick. Select students can study at either the University of Oxford or the University of Cambridge for a year if approved by both Columbia and either Oxford or Cambridge. Columbia also has a dual MA program with the Aga Khan University in London.
Rankings.
Columbia University is ranked 12th in the United States and seventh globally for 2023–2024 by "U.S. News & World Report". QS University Rankings listed Columbia as fifth in the United States. "The Wall Street Journal" and "Times Higher Education" ranked it 15th among U.S. colleges for 2020, though in recent years it has been ranked as high as second. Individual colleges and schools were also nationally ranked by "U.S. News & World Report" for its 2021 edition: Columbia Law School was ranked fourth, the Mailman School of Public Health fourth, the School of Social Work tied for third, Columbia Business School eighth, the College of Physicians and Surgeons tied for sixth for research (and tied for 31st for primary care), the School of Nursing tied for 11th in the master's program and tied for first in the doctorate nursing program, and the Fu Foundation School of Engineering and Applied Science (graduate) tied for 14th.
In 2021, Columbia was ranked seventh in the world (sixth in the United States) by "Academic Ranking of World Universities", sixth in the world by "U.S. News & World Report", 19th in the world by "QS World University Rankings", and 11th globally by "Times Higher Education World University Rankings". It was ranked in the first tier of American research universities, along with Harvard, MIT, and Stanford, in the 2019 report from the Center for Measuring University Performance. Columbia's Graduate School of Architecture, Planning and Preservation was ranked the second most admired graduate program by Architectural Record in 2020.
In 2011, Columbia was ranked the third-best university in the US for forming CEOs, and 12th worldwide.
In 2025, Columbia was ranked 250th out of 257 top colleges in the "Free Speech Rankings" by the Foundation for Individual Rights and Expression and "College Pulse", after ranking 214th of 248 in 2024 and last of 203 in 2022–2023.
In 2024 and 2025, Columbia received a D on the Anti-Defamation League's "Campus Antisemitism Report Card", which the advocacy organization first issued in spring 2024 amid the campus conflict over the pro-Palestinian campus occupations.
Research.
Columbia is classified among "R1: Doctoral Universities – Very high research activity". Columbia was the first North American site where the uranium atom was split. The College of Physicians and Surgeons played a central role in developing the modern understanding of neuroscience with the publication of "Principles of Neural Science", described by historian of science Katja Huenther as the "neuroscience 'bible'". The book was written by a team of Columbia researchers that included Nobel Prize winner Eric Kandel, James H. Schwartz, and Thomas Jessell. Columbia was the birthplace of FM radio and the laser. The first brain-computer interface capable of translating brain signals into speech was developed by neuroengineers at Columbia. The MPEG-2 algorithm for transmitting high-quality audio and video over limited bandwidth was developed by Dimitris Anastassiou, a Columbia professor of electrical engineering. Biologist Martin Chalfie was the first to introduce the use of Green Fluorescent Protein (GFP) in labeling cells in intact organisms. Other inventions and products related to Columbia include Sequential Lateral Solidification (SLS) technology for making LCDs, System Management Arts (SMARTS), Session Initiation Protocol (SIP) (which is used for audio, video, chat, instant messaging and whiteboarding), pharmacopeia, Macromodel (software for computational chemistry), a new and better recipe for glass concrete, blue LEDs, and Beamprop (used in photonics).
Columbia scientists have been credited with about 175 new inventions in the health sciences each year. More than 30 pharmaceutical products based on discoveries and inventions made at Columbia have reached the market. These include Remicade (for arthritis), Reopro (for blood clot complications), Xalatan (for glaucoma), Benefix, Latanoprost (a glaucoma treatment), a shoulder prosthesis, homocysteine testing (for cardiovascular disease), and Zolinza (for cancer therapy). Columbia Technology Ventures (formerly Science and Technology Ventures) manages some 600 patents and more than 250 active license agreements. Patent-related deals earned Columbia more than $230 million in the 2006 fiscal year, according to the university, more than any other university in the world. Columbia owns many unique research facilities, such as the Columbia Institute for Tele-Information, dedicated to telecommunications, and the Goddard Institute for Space Studies, a NASA research institute.
Military and veteran enrollment.
Columbia has enrolled military veterans for over 70 years and is a long-standing participant in the United States Department of Veterans Affairs Yellow Ribbon Program, which allows eligible veterans to pursue a Columbia undergraduate degree regardless of socioeconomic status. As a part of the Eisenhower Leader Development Program (ELDP), in partnership with the United States Military Academy at West Point, Columbia is the only school in the Ivy League to offer a graduate degree program in organizational psychology to aid military officers in tactical decision making and strategic management.
Awards.
Several prestigious awards are administered by Columbia University, most notably the Pulitzer Prize and the Bancroft Prize in history. Other prizes, which are awarded by the Graduate School of Journalism, include the Alfred I. duPont–Columbia University Award, the National Magazine Awards, the Maria Moors Cabot Prizes, the John Chancellor Award, and the Lukas Prizes, which include the J. Anthony Lukas Book Prize and Mark Lynton History Prize. The university also administers the Louisa Gross Horwitz Prize, which is considered an important precursor to the Nobel Prize, 55 of its 117 recipients having gone on to win either a Nobel Prize in Physiology or Medicine or Nobel Prize in Chemistry as of October 2024; the W. Alden Spencer Award; the Vetlesen Prize, which is known as the Nobel Prize of geology; the Japan-U.S. Friendship Commission Prize for the Translation of Japanese Literature, the oldest such award; the Edwin Howard Armstrong award; the Calderone Prize in public health; and the Ditson Conductor's Award.
Student life.
In 2020, Columbia University's student population was 31,455 (8,842 students in undergraduate programs and 22,613 in postgraduate programs), with 45% of the student population identifying as members of a minority group. Some 26% of students have family incomes below $60,000, 16% receive Federal Pell Grants (which mostly go to students whose family incomes are below $40,000), and 17% are the first member of their family to attend a four-year college.
On-campus housing is guaranteed for all four years as an undergraduate. Columbia College and the Fu Foundation School of Engineering and Applied Science (also known as SEAS or Columbia Engineering) share housing in the on-campus residence halls. First-year students usually live in one of the large residence halls situated around South Lawn: Carman Hall, Furnald Hall, Hartley Hall, John Jay Hall, or Wallach Hall (originally Livingston Hall). Upperclassmen participate in a room selection process, in which students choose to live in a mix of either corridor- or apartment-style housing with their friends. The Columbia University School of General Studies, Barnard College and the graduate schools have their own apartment-style housing in the surrounding neighborhood.
Columbia University is home to many fraternities, sororities, and co-educational Greek organizations. Approximately 10–15% of undergraduate students are associated with Greek life. Many Barnard women also join Columbia sororities. There has been a Greek presence on campus since the establishment in 1836 of the Delta chapter of Alpha Delta Phi.
Publications.
The "Columbia Daily Spectator" is the nation's second-oldest continuously operating daily student newspaper. "The Blue and White" is a monthly literary magazine established in 1890 that discusses campus life and local politics. "Bwog", originally an offshoot of "The Blue and White" but now fully independent, is an online campus news and entertainment source. "The Morningside Post" is a student-run multimedia news publication.
Political publications include "The Current", a journal of politics, culture and Jewish Affairs; the "Columbia Political Review", the multi-partisan political magazine of the Columbia Political Union; and "AdHoc", which denotes itself as the "progressive" campus magazine and deals largely with local political issues and arts events.
"Columbia Magazine" is the alumni magazine of Columbia, serving all 340,000+ of the university's alumni. Arts and literary publications include "The Columbia Review", the nation's oldest college literary magazine; "Surgam", the literary magazine of The Philolexian Society; "Quarto", Columbia University's official undergraduate literary magazine; "4x4", a student-run alternative to "Quarto"; "Columbia", a nationally regarded literary journal; the "Columbia Journal of Literary Criticism"; and "The Mobius Strip", an online arts and literary magazine. "Inside New York" is an annual guidebook to New York City, written, edited, and published by Columbia undergraduates. Through a distribution agreement with Columbia University Press, the book is sold at major retailers and independent bookstores.
Columbia is home to numerous undergraduate academic publications. The "Columbia Undergraduate Science Journal" prints original science research in its two annual publications. The "Journal of Politics & Society" is a journal of undergraduate research in the social sciences; "Publius" is an undergraduate journal of politics established in 2008 and published biannually; the "Columbia East Asia Review" allows undergraduates throughout the world to publish original work on China, Japan, Korea, Tibet, and Vietnam and is supported by the Weatherhead East Asian Institute; "The Birch" is an undergraduate journal of Eastern European and Eurasian culture that is the first national student-run journal of its kind; the "Columbia Economics Review" is the undergraduate economic journal on research and policy supported by the Columbia Economics Department; and the "Columbia Science Review" is a science magazine that prints general interest articles and faculty profiles.
Humor publications on Columbia's campus include "The Fed", a triweekly satire and investigative newspaper, and the "Jester of Columbia." Other publications include "The Columbian", the undergraduate colleges' annually published yearbook; the "Gadfly", a biannual journal of popular philosophy produced by undergraduates; and "Rhapsody in Blue", an undergraduate urban studies magazine. Professional journals published by academic departments at Columbia University include "Current Musicology" and "The Journal of Philosophy". During the spring semester, graduate students in the Journalism School publish "The Bronx Beat", a bi-weekly newspaper covering the South Bronx.
Founded in 1961 under the auspices of Columbia University's Graduate School of Journalism, the "Columbia Journalism Review" (CJR) examines day-to-day press performance as well as the forces that affect that performance. The magazine is published six times a year.
Former publications include the "Columbia University Forum", a review of literature and cultural affairs distributed for free to alumni.
Broadcasting.
Columbia is home to two pioneers in undergraduate campus broadcasting, WKCR-FM and CTV. Many undergraduates are also involved with Barnard's radio station, WBAR. WKCR, the student-run radio station that broadcasts to the Tri-state area, claims to be the oldest FM radio station in the world, owing to the university's affiliation with Edwin Howard Armstrong. The station has its studios on the second floor of Alfred Lerner Hall on the Morningside campus with its main transmitter tower at 4 Times Square in Midtown Manhattan. Columbia Television (CTV) is the nation's second-oldest student television station and the home of CTV News, a weekly live news program produced by undergraduate students.
Debate and Model UN.
The Philolexian Society is a literary and debating club founded in 1802, making it the oldest student group at Columbia, as well as the third oldest collegiate literary society in the country. The society annually administers the Joyce Kilmer Memorial Bad Poetry Contest. The Columbia Parliamentary Debate Team competes in tournaments around the country as part of the American Parliamentary Debate Association, and hosts both high school and college tournaments on Columbia's campus, as well as public debates on issues affecting the university.
The Columbia International Relations Council and Association (CIRCA) oversees Columbia's Model United Nations activities. CIRCA hosts college and high school Model UN conferences, brings speakers influential in international politics to campus, and trains students from underprivileged schools in New York in Model UN.
Technology and entrepreneurship.
Columbia is a top supplier of young engineering entrepreneurs for New York City. Over the past 20 years, graduates of Columbia established over 100 technology companies.
The Columbia University Organization of Rising Entrepreneurs (CORE) was founded in 1999. The student-run group aims to foster entrepreneurship on campus. Each year CORE hosts dozens of events, including talks, #StartupColumbia, a conference and $250,000 venture competition, and Ignite@CU, a weekend for undergraduates interested in design, engineering, and entrepreneurship. Notable speakers have included Peter Thiel, Jack Dorsey, Alexis Ohanian, Drew Houston, and Mark Cuban. As of 2006, CORE had awarded graduate and undergraduate students over $100,000 in seed capital.
CampusNetwork, an on-campus social networking site that preceded Facebook, was created and popularized by Columbia engineering student Adam Goldberg in 2003. Mark Zuckerberg later asked Goldberg to join him in Palo Alto to work on Facebook, but Goldberg declined the offer. The Fu Foundation School of Engineering and Applied Science offers a minor in Technical Entrepreneurship through its Center for Technology, Innovation, and Community Engagement. SEAS' entrepreneurship activities focus on community building initiatives in New York and worldwide, made possible through partners such as Microsoft Corporation.
On June 14, 2010, Mayor Michael R. Bloomberg launched the NYC Media Lab to promote innovations in New York's media industry. Situated at the New York University Tandon School of Engineering, the lab is a consortium of Columbia University, New York University, and New York City Economic Development Corporation acting to connect companies with universities in new technology research. The Lab is modeled after similar ones at MIT and Stanford, and was established with a $250,000 grant from the New York City Economic Development Corporation.
World Leaders Forum.
Established in 2003 by university president Lee C. Bollinger, the World Leaders Forum at Columbia University provides the opportunity for students and faculty to listen to world leaders in government, religion, industry, finance, and academia.
Past forum speakers include former president of the United States Bill Clinton, the prime minister of India Atal Bihari Vajpayee, former president of Ghana John Agyekum Kufuor, president of Afghanistan Hamid Karzai, prime minister of Russia Vladimir Putin, president of the Republic of Mozambique Joaquim Alberto Chissano, president of the Republic of Bolivia Carlos Diego Mesa Gisbert, president of the Republic of Romania Ion Iliescu, president of the Republic of Latvia Vaira Vīķe-Freiberga, the first female president of Finland Tarja Halonen, President Yudhoyono of Indonesia, President Pervez Musharraf of the Islamic Republic of Pakistan, Iraq President Jalal Talabani, the 14th Dalai Lama, president of the Islamic Republic of Iran Mahmoud Ahmadinejad, financier George Soros, Mayor of New York City Michael R. Bloomberg, President Václav Klaus of the Czech Republic, President Cristina Fernández de Kirchner of Argentina, former Secretary-General of the United Nations Kofi Annan, and Al Gore.
Other.
The Columbia University Orchestra was founded by composer Edward MacDowell in 1896, and is the oldest continually operating university orchestra in the United States. Undergraduate student composers at Columbia may choose to become involved with Columbia New Music, which sponsors concerts of music written by undergraduate students from all of Columbia's schools. The Notes and Keys, the oldest a cappella group at Columbia, was founded in 1909. There are a number of performing arts groups at Columbia dedicated to producing student theater, including the Columbia Players, King's Crown Shakespeare Troupe (KCST), Columbia Musical Theater Society (CMTS), NOMADS (New and Original Material Authored and Directed by Students), LateNite Theatre, Columbia University Performing Arts League (CUPAL), Black Theatre Ensemble (BTE), sketch comedy group Chowdah, and improvisational troupes Alfred and Fruit Paunch.
The Columbia Queer Alliance is the central Columbia student organization that represents the bisexual, lesbian, gay, transgender, and questioning student population. It is the oldest gay student organization in the world, founded as the Student Homophile League in 1967 by students including lifelong activist Stephen Donaldson.
Columbia University campus military groups include the U.S. Military Veterans of Columbia University and Advocates for Columbia ROTC. In the 2005–06 academic year, the Columbia Military Society, Columbia's student group for ROTC cadets and Marine officer candidates, was renamed the Hamilton Society for "students who aspire to serve their nation through the military in the tradition of Alexander Hamilton".
Columbia has several secret societies, including St. Anthony Hall, which was founded at the university in 1847, and two senior societies, the Nacoms and Sachems.
Athletics.
A member institution of the National Collegiate Athletic Association (NCAA) in Division I FCS, Columbia fields varsity teams in 29 sports and is a member of the Ivy League. The football Lions play home games at the 17,000-seat Robert K. Kraft Field at Lawrence A. Wien Stadium. The Baker Athletics Complex also includes facilities for baseball, softball, soccer, lacrosse, field hockey, tennis, track, and rowing, as well as the new Campbell Sports Center, which opened in January 2013. The basketball, fencing, swimming & diving, volleyball, and wrestling programs are based at the Dodge Physical Fitness Center on the main campus.
Former students include Baseball Hall of Famers Lou Gehrig and Eddie Collins, football Hall of Famer Sid Luckman, NFL player Marcellus Wiley, and world champion women's weightlifter Karyn Marshall. On May 17, 1939, fledgling NBC broadcast a doubleheader between the Columbia Lions and the Princeton Tigers at Columbia's Baker Field, making it the first regular athletic event ever televised.
Columbia University has participated in multiple firsts in collegiate athletics. However, the football program is best known for its record of futility set during the 1980s: between 1983 and 1988, the team lost 44 games in a row, which is still the record for the NCAA Football Championship Subdivision. The streak was broken on October 8, 1988, with a 16–13 victory over arch-rival Princeton University. That was the Lions' first victory at Wien Stadium, which had been opened during the losing streak and was already four years old. A newer tradition has developed with the Liberty Cup, awarded annually to the winner of the football game between Fordham and Columbia, two of the only three NCAA Division I football teams in New York City.
Traditions.
The Varsity Show.
The Varsity Show is one of the oldest traditions at Columbia. Founded in 1893 as a fundraiser for the university's fledgling athletic teams, the Varsity Show now draws together the entire Columbia undergraduate community for a series of performances every April. Dedicated to producing a unique full-length musical that skewers and satirizes many dubious aspects of life at Columbia, the Varsity Show is written and performed exclusively by university undergraduates. Various renowned playwrights, composers, authors, directors, and actors have contributed to the Varsity Show, either as writers or performers, while students at Columbia, including Richard Rodgers, Oscar Hammerstein II, Lorenz Hart, Herman J. Mankiewicz, I. A. L. Diamond, Herman Wouk, Greta Gerwig, and Kate McKinnon.
Notable past shows include "Fly With Me" (1920), "The Streets of New York" (1948), "The Sky's the Limit" (1954), and "Angels at Columbia" (1994). In particular, "The Streets of New York", after having been revived three times, opened off-Broadway in 1963 and was awarded a 1964 Drama Desk Award. "The Mischief Maker" (1903), written by Edgar Allan Woolf and Cassius Freeborn, premiered at Madison Square Garden in 1906 as "Mam'zelle Champagne".
Tree Lighting and Yule Log ceremonies.
The campus Tree Lighting ceremony was inaugurated in 1998. It celebrates the illumination of the medium-sized trees lining College Walk in front of Kent Hall and Hamilton Hall on the east end and Dodge Hall and Pulitzer Hall on the west, just before finals week in early December. The lights remain on until February 28. Students meet at the sundial for free hot chocolate, performances by "a cappella" groups, and speeches by the university president and a guest.
Immediately following the College Walk festivities is one of Columbia's older holiday traditions, the lighting of the Yule Log. The Christmas ceremony dates to a period prior to the American Revolutionary War, but lapsed before being revived by President Nicholas Murray Butler in 1910. A troop of students dressed as Continental Army soldiers carry the eponymous log from the sundial to the lounge of John Jay Hall, where it is lit amid the singing of seasonal carols. The Christmas ceremony is accompanied by a reading of "A Visit From St. Nicholas" by Clement Clarke Moore and "Yes, Virginia, There is a Santa Claus" by Francis Pharcellus Church.
Notable people.
Alumni.
The university has graduated many notable alumni, including five Founding Fathers of the United States, an author of the United States Constitution and a member of the Committee of Five. Three United States presidents have attended Columbia, as well as ten Justices of the Supreme Court of the United States, including three Chief Justices. In all, 125 Pulitzer Prize winners and 39 Oscar winners have attended Columbia, and 101 alumni have been members of the National Academies.
In a 2016 ranking of universities worldwide with respect to living graduates who are billionaires, Columbia ranked second, after Harvard.
Former U.S. Presidents Theodore Roosevelt and Franklin Delano Roosevelt attended the law school. Other political figures educated at Columbia include former U.S. President Barack Obama, Associate Justice of the U.S. Supreme Court Ruth Bader Ginsburg, former U.S. Secretary of State Madeleine Albright, former chairman of the U.S. Federal Reserve Bank Alan Greenspan, U.S. Attorney General Eric Holder, and U.S. Solicitor General Donald Verrilli Jr. The university has also educated 29 foreign heads of state, including president of Georgia Mikheil Saakashvili, president of East Timor José Ramos-Horta, president of Estonia Toomas Hendrik Ilves and other historical figures such as Wellington Koo, Radovan Karadžić, Gaston Eyskens, and T. V. Soong. One of the founding fathers of modern India and the prime architect of the Constitution of India, B. R. Ambedkar, was an alumnus.
Alumni of Columbia have occupied top positions in Wall Street and the rest of the business world. Notable members of the Astor family attended Columbia, while other business graduates include investor Warren Buffett, former CEO of PBS and NBC Lawrence K. Grossman, chairman of Walmart S. Robson Walton, Bain Capital co-managing partner Jonathan Lavine, Thomson Reuters CEO Tom Glocer, New York Stock Exchange president Lynn Martin, and AllianceBernstein chairman and CEO Lewis A. Sanders. CEOs of top Fortune 500 companies include James P. Gorman of Morgan Stanley, Robert J. Stevens of Lockheed Martin, Philippe Dauman of Viacom, Robert Bakish of Paramount Global, Ursula Burns of Xerox, Devin Wenig of EBay, Vikram Pandit of Citigroup, Ralph Izzo of Public Service Enterprise Group, Gail Koziara Boudreaux of Anthem, and Frank Blake of The Home Depot. Notable labor organizer and women's educator Louise Leonard McLaren received her degree of Master of Arts from Columbia.
In science and technology, Columbia alumni include: founder of IBM Herman Hollerith; inventor of FM radio Edwin Armstrong; Francis Mechner; Hyman Rickover, who was integral to the development of the nuclear submarine; founder of Google China Kai-Fu Lee; scientists Stephen Jay Gould, Robert Millikan, and Mihajlo Pupin; helium–neon laser inventor Ali Javan; chief engineer of the New York City Subway William Barclay Parsons; philosophers Irwin Edman and Robert Nozick; economist Milton Friedman; psychologist Harriet Babcock; archaeologist Josephine Platner Shear; and sociologists Lewis A. Coser and Rose Laub Coser.
Many Columbia alumni have gone on to renowned careers in the arts, including composers Richard Rodgers, Oscar Hammerstein II, Lorenz Hart, and Art Garfunkel; and painter Georgia O'Keeffe. Five United States Poet Laureates received their degrees from Columbia. Columbia alumni have made an indelible mark in the field of American poetry and literature, with such people as Jack Kerouac and Allen Ginsberg, pioneers of the Beat Generation; and Langston Hughes and Zora Neale Hurston, seminal figures in the Harlem Renaissance, all having attended the university. Other notable writers who attended Columbia include authors Isaac Asimov, J.D. Salinger, Upton Sinclair, Ursula K. Le Guin, Danielle Valore Evans, and Hunter S. Thompson. In architecture, William Lee Stoddart, a prolific architect of U.S. East Coast hotels, is an alumnus.
University alumni have also been very prominent in the film industry, with 33 alumni and former students winning a combined 43 Academy Awards. Some notable Columbia alumni who have gone on to work in film include directors Sidney Lumet ("12 Angry Men") and Kathryn Bigelow ("The Hurt Locker"), screenwriters Howard Koch ("Casablanca") and Joseph L. Mankiewicz ("All About Eve"), and actors James Cagney, Ed Harris and Timothée Chalamet.
Faculty.
As of 2021, Columbia employs 4,381 faculty, including 70 members of the National Academy of Sciences, 178 members of the American Academy of Arts and Sciences, and 65 members of the National Academy of Medicine. In total, the Columbia faculty has included 52 Nobel laureates, 12 National Medal of Science recipients, and 32 National Academy of Engineering members.
Columbia University faculty played particularly important roles during World War II and the creation of the New Deal under President Franklin D. Roosevelt, who attended Columbia Law School. The three core members of Roosevelt's Brain Trust (Adolf A. Berle, Raymond Moley, and Rexford Tugwell) were professors at Columbia. The Statistical Research Group, which used statistics to analyze military problems during World War II, was composed of Columbia researchers and faculty including George Stigler and Milton Friedman. Columbia faculty and researchers, including Enrico Fermi, Leo Szilard, Eugene T. Booth, John R. Dunning, George B. Pegram, Walter Zinn, Chien-Shiung Wu, Francis G. Slack, Harold Urey, Herbert L. Anderson, and Isidor Isaac Rabi, also played a significant role during the early phases of the Manhattan Project.
Following the rise of Nazi Germany, the exiled Institute for Social Research at Goethe University Frankfurt affiliated itself with Columbia from 1934 to 1950. It was during this period that thinkers including Theodor Adorno, Max Horkheimer, and Herbert Marcuse wrote and published some of the most seminal works of the Frankfurt School, including "Reason and Revolution", "Dialectic of Enlightenment", and "Eclipse of Reason". Professors Edward Said, author of "Orientalism", and Gayatri Spivak are generally considered founders of the field of postcolonialism; other professors who have significantly contributed to the field include Hamid Dabashi and Joseph Massad. The works of professors Kimberlé Crenshaw, Patricia J. Williams, and Kendall Thomas were foundational to the field of critical race theory.
Columbia and its affiliated faculty have also made significant contributions to the study of religion. The affiliated Union Theological Seminary is a center of liberal Christianity in the United States, having served as the birthplace of Black theology through the efforts of faculty including James H. Cone and Cornel West, and of Womanist theology, through the works of Katie Cannon, Emilie Townes, and Delores S. Williams. Likewise, the Jewish Theological Seminary of America was the birthplace of the Conservative Judaism movement in the United States, which was founded and led by faculty members including Solomon Schechter, Alexander Kohut, and Louis Ginzberg in the early 20th century, and is a major center for Jewish studies in general.
Other schools of thought in the humanities to which Columbia professors made significant contributions include the Dunning School, founded by William Archibald Dunning; the anthropological schools of historical particularism and cultural relativism, founded by Franz Boas; and functional psychology, whose founders and proponents include John Dewey, James McKeen Cattell, Edward L. Thorndike, and Robert S. Woodworth.
Notable figures that have served as the president of Columbia University include 34th President of the United States Dwight D. Eisenhower, 4th Vice President of the United States George Clinton, Founding Father and U.S. Senator from Connecticut William Samuel Johnson, Nobel Peace Prize laureate Nicholas Murray Butler, and First Amendment scholar Lee Bollinger.
Notable Columbia University faculty include Zbigniew Brzezinski, Sonia Sotomayor, Kimberlé Crenshaw, Lee Bollinger, Franz Boas, Margaret Mead, Edward Sapir, John Dewey, Charles A. Beard, Max Horkheimer, Herbert Marcuse, Edward Said, Gayatri Chakravorty Spivak, Orhan Pamuk, Edwin Howard Armstrong, Enrico Fermi, Chien-Shiung Wu, Tsung-Dao Lee, Jack Steinberger, Joachim Frank, Joseph Stiglitz, Jeffrey Sachs, Robert Mundell, Thomas Hunt Morgan, Eric Kandel, Richard Axel, and Andrei Okounkov.
Cell wall
A cell wall is a structural layer that surrounds some cell types, found immediately outside the cell membrane. It can be tough, flexible, and sometimes rigid. Primarily, it provides the cell with structural support, shape, protection, and functions as a selective barrier. Another vital role of the cell wall is to help the cell withstand osmotic pressure and mechanical stress. While absent in many eukaryotes, including animals, cell walls are prevalent in other organisms such as fungi, algae and plants, and are commonly found in most prokaryotes, with the exception of mollicute bacteria.
The composition of cell walls varies across taxonomic groups, species, cell type, and the cell cycle. In land plants, the primary cell wall comprises polysaccharides like cellulose, hemicelluloses, and pectin. Often, other polymers such as lignin, suberin or cutin are anchored to or embedded in plant cell walls. Algae exhibit cell walls composed of glycoproteins and polysaccharides, such as carrageenan and agar, distinct from those in land plants. Bacterial cell walls contain peptidoglycan, while archaeal cell walls vary in composition, potentially consisting of glycoprotein S-layers, pseudopeptidoglycan, or polysaccharides. Fungi possess cell walls constructed from chitin, a polymer of N-acetylglucosamine. Diatoms have a unique cell wall composed of biogenic silica.
History.
A plant cell wall was first observed and named (simply as a "wall") by Robert Hooke in 1665. However, "the dead excrusion product of the living protoplast" was then largely neglected for almost three centuries, remaining a subject of scientific interest mainly as a resource for industrial processing or in relation to animal or human health.
In 1804, Karl Rudolphi and J.H.F. Link proved that cells had independent cell walls. Previously, it had been thought that cells shared walls and that fluid passed between them this way.
The mode of formation of the cell wall was controversial in the 19th century. Hugo von Mohl (1853, 1858) advocated the idea that the cell wall grows by apposition. Carl Nägeli (1858, 1862, 1863) believed that the growth of the wall in thickness and in area was due to a process termed intussusception. Each theory was improved in the following decades: the apposition (or lamination) theory by Eduard Strasburger (1882, 1889), and the intussusception theory by Julius Wiesner (1886).
In 1930, Ernst Münch coined the term "apoplast" in order to separate the "living" symplast from the "dead" plant region, the latter of which included the cell wall.
By the 1980s, some authors suggested replacing the term "cell wall", particularly as it was used for plants, with the more precise term "extracellular matrix", as used for animal cells, but others preferred the older term.
Properties.
Cell walls serve similar purposes in those organisms that possess them. They may give cells rigidity and strength, offering protection against mechanical stress. The chemical composition and mechanical properties of the cell wall are linked with plant cell growth and morphogenesis. In multicellular organisms, they permit the organism to build and hold a definite shape. Cell walls also limit the entry of large molecules that may be toxic to the cell. They further permit the creation of stable osmotic environments by preventing osmotic lysis and helping to retain water. Their composition, properties, and form may change during the cell cycle and depend on growth conditions.
Rigidity of cell walls.
In most cells, the cell wall is flexible, meaning that it will bend rather than holding a fixed shape, but it has considerable tensile strength. The apparent rigidity of primary plant tissues is enabled by cell walls, but is not due to the walls' stiffness; hydraulic turgor pressure creates this rigidity, along with the wall structure. The flexibility of the cell walls is seen when plants wilt, so that the stems and leaves begin to droop, or in seaweeds that bend in water currents. The apparent rigidity of the cell wall thus results from inflation of the cell contained within, an inflation that results from the passive uptake of water.
In plants, a secondary cell wall is a thicker additional layer of cellulose which increases wall rigidity. Additional layers may be formed by lignin in xylem cell walls, or suberin in cork cell walls. These compounds are rigid and waterproof, making the secondary wall stiff. Both wood and bark cells of trees have secondary walls. Other parts of plants such as the leaf stalk may acquire similar reinforcement to resist the strain of physical forces.
Permeability.
The primary cell wall of most plant cells is freely permeable to small molecules, including small proteins. The pH is an important factor governing the transport of molecules through cell walls.
Evolution.
Cell walls evolved independently in many groups.
The photosynthetic eukaryotes (the so-called plants and algae) are one group with cellulose cell walls, where the cell wall is closely related to the evolution of multicellularity, terrestrialization and vascularization. The CesA cellulose synthase evolved in "Cyanobacteria" and has been part of Archaeplastida since endosymbiosis; secondary endosymbiosis events transferred it (with the arabinogalactan proteins) further into brown algae and oomycetes. Plants later evolved various genes from CesA, including the Csl (cellulose synthase-like) family of proteins and additional Ces proteins. Combined with the various glycosyltransferases (GT), they enable more complex chemical structures to be built.
Fungi use a chitin-glucan-protein cell wall. They share the 1,3-β-glucan synthesis pathway with plants, using homologous GT48 family 1,3-Beta-glucan synthases to perform the task, suggesting that such an enzyme is very ancient within the eukaryotes. Their glycoproteins are rich in mannose. The cell wall might have evolved to deter viral infections. Proteins embedded in cell walls are variable, contained in tandem repeats subject to homologous recombination. An alternative scenario is that fungi started with a chitin-based cell wall and later acquired the GT-48 enzymes for the 1,3-β-glucans via horizontal gene transfer. The pathway leading to 1,6-β-glucan synthesis is not sufficiently known in either case.
Plant cell walls.
The walls of plant cells must have sufficient tensile strength to withstand internal osmotic pressures of several times atmospheric pressure that result from the difference in solute concentration between the cell interior and external solutions. Plant cell walls vary from 0.1 to several μm in thickness.
Layers.
Up to three strata or layers may be found in plant cell walls: the middle lamella, the outermost layer shared between adjacent cells; the primary cell wall; and, in some cells, the secondary cell wall.
Composition.
In the primary (growing) plant cell wall, the major carbohydrates are cellulose, hemicellulose and pectin. The cellulose microfibrils are linked via hemicellulosic tethers to form the cellulose-hemicellulose network, which is embedded in the pectin matrix. The most common hemicellulose in the primary cell wall is xyloglucan. In grass cell walls, xyloglucan and pectin are reduced in abundance and partially replaced by glucuronoarabinoxylan, another type of hemicellulose. Primary cell walls characteristically extend (grow) by a mechanism called acid growth, mediated by expansins, extracellular proteins activated by acidic conditions that modify the hydrogen bonds between pectin and cellulose. This functions to increase cell wall extensibility. The outer part of the primary cell wall of the plant epidermis is usually impregnated with cutin and wax, forming a permeability barrier known as the plant cuticle.
Secondary cell walls contain a wide range of additional compounds that modify their mechanical properties and permeability. The major polymers that make up wood (largely secondary cell walls) include cellulose, xylan (a type of hemicellulose), and lignin.
Additionally, structural proteins (1–5%) are found in most plant cell walls; they are classified as hydroxyproline-rich glycoproteins (HRGP), arabinogalactan proteins (AGP), glycine-rich proteins (GRPs), and proline-rich proteins (PRPs). Each class of glycoprotein is defined by a characteristic, highly repetitive protein sequence. Most are glycosylated, contain hydroxyproline (Hyp) and become cross-linked in the cell wall. These proteins are often concentrated in specialized cells and in cell corners. Cell walls of the epidermis may contain cutin. The Casparian strip in the endodermis of roots and the cork cells of plant bark contain suberin. Both cutin and suberin are polyesters that function as permeability barriers to the movement of water. The relative composition of carbohydrates, secondary compounds and proteins varies between plants and between the cell type and age. Plant cell walls also contain numerous enzymes, such as hydrolases, esterases, peroxidases, and transglycosylases, that cut, trim and cross-link wall polymers.
Secondary walls, especially in grasses, may also contain microscopic silica crystals, which may strengthen the wall and protect it from herbivores.
Cell walls in some plant tissues also function as storage deposits for carbohydrates that can be broken down and resorbed to supply the metabolic and growth needs of the plant. For example, endosperm cell walls in the seeds of cereal grasses, nasturtium, and other species are rich in glucans and other polysaccharides that are readily digested by enzymes during seed germination to form simple sugars that nourish the growing embryo.
Formation.
The middle lamella is laid down first, formed from the cell plate during cytokinesis, and the primary cell wall is then deposited inside the middle lamella. The actual structure of the cell wall is not clearly defined, and several models exist: the covalently linked cross model, the tether model, the diffuse layer model and the stratified layer model. However, the primary cell wall can be defined as composed of cellulose microfibrils aligned at all angles. Cellulose microfibrils are produced at the plasma membrane by the cellulose synthase complex, which is proposed to be made of a hexameric rosette that contains three cellulose synthase catalytic subunits for each of the six units. Microfibrils are held together by hydrogen bonds to provide a high tensile strength. The cells are held together and share the gelatinous membrane (the middle lamella), which contains magnesium and calcium pectates (salts of pectic acid). Cells interact through plasmodesmata, which are inter-connecting channels of cytoplasm that connect to the protoplasts of adjacent cells across the cell wall.
In some plants and cell types, after a maximum size or point in development has been reached, a "secondary wall" is constructed between the plasma membrane and primary wall. Unlike the primary wall, the cellulose microfibrils are aligned parallel in layers, the orientation changing slightly with each additional layer so that the structure becomes helicoidal. Cells with secondary cell walls can be rigid, as in the gritty sclereid cells in pear and quince fruit. Cell to cell communication is possible through pits in the secondary cell wall that allow plasmodesmata to connect cells through the secondary cell walls.
Fungal cell walls.
There are several groups of organisms that have been called "fungi". Some of these groups (Oomycete and Myxogastria) have been transferred out of the Kingdom Fungi, in part because of fundamental biochemical differences in the composition of the cell wall. Most true fungi have a cell wall consisting largely of chitin and other polysaccharides. True fungi do not have cellulose in their cell walls.
In fungi, the cell wall is the outer-most layer, external to the plasma membrane. The fungal cell wall is a matrix of three main components: chitin, glucans, and proteins.
Other eukaryotic cell walls.
Algae.
Like plants, algae have cell walls. Algal cell walls contain either polysaccharides (such as cellulose (a glucan)) or a variety of glycoproteins (Volvocales) or both. The inclusion of additional polysaccharides in algal cell walls is used as a feature for algal taxonomy.
Other compounds that may accumulate in algal cell walls include sporopollenin and calcium ions.
The group of algae known as the diatoms synthesize their cell walls (also known as frustules or valves) from silicic acid. Significantly, silica frustules require only about 8% as much energy to synthesize as the organic cell walls produced by other groups, potentially a major saving on the overall cell energy budget and possibly an explanation for the higher growth rates observed in diatoms.
In brown algae, phlorotannins may be a constituent of the cell walls.
Water molds.
The Oomycetes, also known as water molds, are saprotrophic plant pathogens, like many fungi. Until recently they were widely believed to be fungi, but structural and molecular evidence has led to their reclassification as heterokonts, related to autotrophic brown algae and diatoms. Unlike fungi, oomycetes typically possess cell walls of cellulose and glucans rather than chitin, although some genera (such as "Achlya" and "Saprolegnia") do have chitin in their walls. The fraction of cellulose in the walls is only 4 to 20%, far less than the fraction of glucans. Oomycete cell walls also contain the amino acid hydroxyproline, which is not found in fungal cell walls.
Slime molds.
The dictyostelids are another group formerly classified among the fungi. They are slime molds that feed as unicellular amoebae, but aggregate into a reproductive stalk and sporangium under certain conditions. Cells of the reproductive stalk, as well as the spores formed at the apex, possess a cellulose wall. The spore wall has three layers, the middle one composed primarily of cellulose, while the innermost is sensitive to cellulase and pronase.
Prokaryotic cell walls.
Bacterial cell walls.
Around the outside of the cell membrane is the bacterial cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by unusual peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, although L-form bacteria that lack a cell wall can be produced in the laboratory. The antibiotic penicillin kills bacteria by preventing the cross-linking of peptidoglycan, which weakens the cell wall and causes the cell to lyse. The lysozyme enzyme can also damage bacterial cell walls.
There are broadly speaking two different types of cell wall in bacteria, called gram-positive and gram-negative. The names originate from the reaction of cells to the Gram stain, a test long-employed for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids.
Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the gram-negative cell wall and only the Bacillota and Actinomycetota (previously known as the low G+C and high G+C gram-positive bacteria, respectively) have the alternative gram-positive arrangement.
These differences in structure produce differences in antibiotic susceptibility. The beta-lactam antibiotics (e.g. penicillin, cephalosporin) block the cross-linking of peptidoglycan, although the outer membrane of gram-negative bacteria restricts the access of some of them. The glycopeptide antibiotics (e.g. vancomycin, teicoplanin, telavancin) are too large to cross the gram-negative outer membrane, so they only work against gram-positive pathogens such as "Staphylococcus aureus"; gram-negative pathogens such as "Haemophilus influenzae" or "Pseudomonas aeruginosa" are resistant to them.
Archaeal cell walls.
Although not truly unique, the cell walls of Archaea are unusual. Whereas peptidoglycan is a standard component of all bacterial cell walls, all archaeal cell walls lack peptidoglycan, though some methanogens have a cell wall made of a similar polymer called pseudopeptidoglycan. There are four types of cell wall currently known among the Archaea.
One type of archaeal cell wall is that composed of pseudopeptidoglycan (also called pseudomurein). This type of wall is found in some methanogens, such as "Methanobacterium" and "Methanothermus". While the overall structure of archaeal "pseudo"peptidoglycan superficially resembles that of bacterial peptidoglycan, there are a number of significant chemical differences. Like the peptidoglycan found in bacterial cell walls, pseudopeptidoglycan consists of polymer chains of glycan cross-linked by short peptide connections. However, unlike peptidoglycan, the sugar N-acetylmuramic acid is replaced by N-acetyltalosaminuronic acid, and the two sugars are bonded with a β-1,3 glycosidic linkage instead of β-1,4. Additionally, the cross-linking peptides are L-amino acids rather than D-amino acids as they are in bacteria.
A second type of archaeal cell wall is found in "Methanosarcina" and "Halococcus". This type of cell wall is composed entirely of a thick layer of polysaccharides, which may be sulfated in the case of "Halococcus". Structure in this type of wall is complex and not fully investigated.
A third type of wall among the Archaea consists of glycoprotein, and occurs in the hyperthermophiles, "Halobacterium", and some methanogens. In "Halobacterium", the proteins in the wall have a high content of acidic amino acids, giving the wall an overall negative charge. The result is an unstable structure that is stabilized by the presence of large quantities of positive sodium ions that neutralize the charge. Consequently, "Halobacterium" thrives only under conditions with high salinity.
In other Archaea, such as "Methanomicrobium" and "Desulfurococcus", the wall may be composed only of surface-layer proteins, known as an "S-layer". S-layers are common in bacteria, where they serve as either the sole cell-wall component or an outer layer in conjunction with polysaccharides. Most Archaea are Gram-negative, though at least one Gram-positive member is known.
Other cell coverings.
Many protists and bacteria produce other cell surface structures apart from cell walls, either external (extracellular matrix) or internal. Many algae have a sheath or envelope of mucilage outside the cell made of exopolysaccharides. Diatoms build a frustule from silica extracted from the surrounding water; radiolarians, foraminiferans, testate amoebae and silicoflagellates also produce a skeleton from minerals, called a test in some groups. Many green algae, such as "Halimeda" and the Dasycladales, and some red algae, the Corallinales, encase their cells in a secreted skeleton of calcium carbonate. In each case, the wall is rigid, essentially inorganic, and a non-living component of the cell. Some golden algae, ciliates and choanoflagellates produce a shell-like protective outer covering called a lorica. Some dinoflagellates have a theca of cellulose plates, and coccolithophorids have coccoliths.
An extracellular matrix (ECM) is also present in metazoans. Its composition varies between cells, but collagens are the most abundant protein in the ECM.
|
6313
|
28481209
|
https://en.wikipedia.org/wiki?curid=6313
|
Classical element
|
The classical elements typically refer to earth, water, air, fire, and (later) aether which were proposed to explain the nature and complexity of all matter in terms of simpler substances. Ancient cultures in Greece, Angola, Tibet, India, and Mali had similar lists which sometimes referred, in local languages, to "air" as "wind", and to "aether" as "space".
These different cultures and even individual philosophers had widely varying explanations concerning their attributes and how they related to observable phenomena as well as cosmology. Sometimes these theories overlapped with mythology and were personified in deities. Some of these interpretations included atomism (the idea of very small, indivisible portions of matter), but other interpretations considered the elements to be divisible into infinitely small pieces without changing their nature.
While the classification of the material world in ancient India, Hellenistic Egypt, and ancient Greece into air, earth, fire, and water was more philosophical, during the Middle Ages medieval scientists used practical, experimental observation to classify materials. In Europe, the ancient Greek concept, devised by Empedocles, evolved into the systematic classifications of Aristotle and Hippocrates. This evolved slightly into the medieval system, and eventually became the object of experimental verification in the 17th century, at the start of the Scientific Revolution.
Modern science does not support the classical elements to classify types of substances. Atomic theory classifies atoms into more than a hundred chemical elements such as oxygen, iron, and mercury, which may form chemical compounds and mixtures. The modern categories roughly corresponding to the classical elements are the states of matter produced under different temperatures and pressures. Solid, liquid, gas, and plasma share many attributes with the corresponding classical elements of earth, water, air, and fire, but these states describe the similar behavior of different types of atoms at similar energy levels, not the characteristic behavior of certain atoms or substances.
Hellenistic philosophy.
The ancient Greek concept of four basic elements, these being earth, water, air, and fire, dates from pre-Socratic times and persisted throughout the Middle Ages and into the Early modern period, deeply influencing European thought and culture.
Pre-Socratic elements.
Primordial element.
The classical elements were first proposed independently by several early Pre-Socratic philosophers. Greek philosophers had debated which substance was the "arche" ("first principle"), or primordial element from which everything else was made. Thales believed that water was this principle. Anaximander argued that the primordial substance was not any of the known substances, but could be transformed into them, and they into each other. Anaximenes favored air, and Heraclitus championed fire.
Fire, earth, air, and water.
The Greek philosopher Empedocles was the first to propose the four classical elements as a set: fire, earth, air, and water. He called them the four "roots". Empedocles also proved (at least to his own satisfaction) that air was a separate substance by observing that a bucket inverted in water did not become filled with water, a pocket of air remaining trapped inside.
Fire, earth, air, and water have become the most popular set of classical elements in modern interpretations. One such version was provided by Robert Boyle in "The Sceptical Chymist", which was published in 1661 in the form of a dialogue between five characters, with "Themistius", the Aristotelian of the party, speaking in defence of the four elements.
Humorism (Hippocrates).
According to Galen, these elements were used by Hippocrates in describing the human body with an association with the four humours: yellow bile (fire), black bile (earth), blood (air), and phlegm (water). Medical care was primarily about helping the patient stay in or return to their own personal natural balanced state.
Plato.
Plato (428/423 – 348/347 BC) seems to have been the first to use the term "element" in reference to air, fire, earth, and water. The ancient Greek word for element, "stoicheion" (from "stoicheo", "to line up"), meant "smallest division (of a sun-dial), a syllable"; as the composing unit of an alphabet it could denote a letter, the smallest unit from which a word is formed.
Aristotle.
In "On the Heavens" (350 BC), Aristotle defines "element" in general:
In his "On Generation and Corruption", Aristotle related each of the four elements to two of the four sensible qualities:
A classic diagram has one square inscribed in the other, with the corners of one being the classical elements, and the corners of the other being the properties. The opposite corner is the opposite of these properties, "hot – cold" and "dry – wet".
Aether.
Aristotle added a fifth element, aether (Latin "aether", English "ether"), as the quintessence, reasoning that whereas fire, earth, air, and water were earthly and corruptible, since no changes had been perceived in the heavenly regions, the stars could not be made out of any of the four elements but must be made of a different, unchangeable, heavenly substance. It had previously been believed by pre-Socratics such as Empedocles and Anaxagoras that aether, the name applied to the material of heavenly bodies, was a form of fire. Aristotle himself did not use the term "aether" for the fifth element, and strongly criticised the pre-Socratics for associating the term with fire; he preferred a number of other terms indicating eternal movement, thus emphasising the evidence for his discovery of a new element, and postulated that the heavens were made of it. These five elements have been associated since Plato's "Timaeus" with the five platonic solids: earth was associated with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron. Of the fifth Platonic solid, the dodecahedron, Plato obscurely remarked, "...the god used [it] for arranging the constellations on the whole heaven"; Aristotle had no interest in matching his fifth element with Plato's fifth solid.
Neo-Platonism.
The Neoplatonic philosopher Proclus rejected Aristotle's theory relating the elements to the sensible qualities hot, cold, wet, and dry. He maintained that each of the elements has three properties. Fire is sharp (ὀξυτητα), subtle (λεπτομερειαν), and mobile (εὐκινησιαν) while its opposite, earth, is blunt (αμβλυτητα), dense (παχυμερειαν), and immobile (ακινησιαν); they are joined by the intermediate elements, air and water, in the following fashion: air, like fire, is subtle and mobile but, like earth, blunt; water, like earth, is blunt and dense but, like fire, mobile.
Hermeticism.
A text written in Egypt in Hellenistic or Roman times called the "Kore Kosmou" ("Virgin of the World"), ascribed to Hermes Trismegistus (associated with the Egyptian god Thoth), names the four elements fire, water, air, and earth.
Ancient Indian philosophy.
Hinduism.
The system of five elements is found in the Vedas, especially Ayurveda; the "pancha mahabhuta", or "five great elements", of Hinduism are earth ("bhumi"), water ("apas" or "jala"), fire ("agni" or "tejas"), air ("vayu"), and aether or space ("akasha").
They further suggest that all of creation, including the human body, is made of these five essential elements and that upon death, the human body dissolves into these five elements of nature, thereby balancing the cycle of nature.
The five elements are associated with the five senses, and act as the gross medium for the experience of sensations. The basest element, earth, created using all the other elements, can be perceived by all five senses — (i) hearing, (ii) touch, (iii) sight, (iv) taste, and (v) smell. The next higher element, water, has no odor but can be heard, felt, seen and tasted. Next comes fire, which can be heard, felt and seen. Air can be heard and felt. "Akasha" (aether) is beyond the senses of smell, taste, sight, and touch; it being accessible to the sense of hearing alone.
Buddhism.
Buddhism has had a variety of thought about the five elements and their existence and relevance, some of which continue to this day.
In the Pali literature, the "mahabhuta" ("great elements") or "catudhatu" ("four elements") are earth, water, fire and air. In early Buddhism, the four elements are a basis for understanding suffering and for liberating oneself from suffering. The earliest Buddhist texts explain that the four primary material elements are solidity, fluidity, temperature, and mobility, characterized as earth, water, fire, and air, respectively.
The Buddha's teaching regarding the four elements is to be understood as the base of all observation of real sensations rather than as a philosophy. The four properties are cohesion (water), solidity or inertia (earth), expansion or vibration (air) and heat or energy content (fire). He promulgated a categorization of mind and matter as composed of eight types of "kalapas", of which the four elements are primary; a secondary group of four (colour, smell, taste, and nutriment) is derivative of the four primaries.
Thanissaro Bhikkhu (1997) translated an extract of Shakyamuni Buddha's teaching on the elements from Pali into English.
Tibetan Buddhist medical literature speaks of the five elements, or "elemental properties": earth, water, fire, wind, and space. The concept was extensively used in traditional Tibetan medicine. Tibetan Buddhist theology, tantra traditions, and "astrological texts" also spoke of them making up the "environment, [human] bodies," and at the smallest or "subtlest" level of existence, parts of thought and the mind. Also at the subtlest level of existence, the elements exist as "pure natures represented by the five female buddhas", Ākāśadhātviśvarī, Buddhalocanā, Mamakī, Pāṇḍarāvasinī, and Samayatārā, and these pure natures "manifest as the physical properties of earth (solidity), water (fluidity), fire (heat and light), wind (movement and energy), and" the expanse of space. These natures exist as all "qualities" that are in the physical world and take forms in it.
Ancient African philosophy.
Angola.
In traditional Bakongo religion, the five elements are incorporated into the Kongo cosmogram. This sacred symbol also depicts the physical world ("Nseke"), the spiritual world of the ancestors ("Mpémba"), the Kalûnga line that runs between the two worlds, the circular void that originally formed the two worlds ("mbûngi"), and the path of the sun. Each element correlates to a period in the life cycle, which the Bakongo people also equate to the four cardinal directions. According to their cosmology, all living things go through this cycle.
Mali.
In traditional Bambara spirituality, the Supreme God created four additional essences of himself during creation. Together, these five essences of the deity correlate with the five classical elements.
Post-classical history.
Alchemy.
The elemental system used in medieval alchemy was developed primarily by the anonymous authors of the Arabic works attributed to Pseudo-Apollonius of Tyana. This system consisted of the four classical elements of air, earth, fire, and water, in addition to a new theory called the sulphur-mercury theory of metals, which was based on two elements: sulphur, characterizing the principle of combustibility, "the stone which burns"; and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy.
The three metallic principles—sulphur to flammability or combustion, mercury to volatility and stability, and salt to solidity—became the "tria prima" of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).
Chinese.
Chinese traditional concepts adopt a set of elements called the ("wuxing", literally "five phases"). These five are Metal or Gold (金 "Jīn"), Wood (木 "Mù"), Water (水 "Shuǐ"), Fire (火 "Huǒ"), and Earth or Soil (土 "Tǔ"). These can be linked to Taiji, Yinyang, Four Symbols, Bagua, Hexagram and I Ching.
Japanese.
Japanese traditions use a set of elements called the ("godai", literally "five great"). These five are earth, water, fire, wind/air, and void. These came from Indian Vastu shastra philosophy and Buddhist beliefs; in addition, the classical Chinese elements (, "wu xing") are also prominent in Japanese culture, especially to the influential Neo-Confucianists during the medieval Edo period.
Medieval Aristotelian philosophy.
The Islamic philosophers al-Kindi, Avicenna and Fakhr al-Din al-Razi followed Aristotle in connecting the four elements with the four natures: heat and cold (the active forces), and dryness and moisture (the recipients).
Medicine Wheel.
The medicine wheel symbol is a modern invention attributed to Native American peoples dating to approximately 1972, with the following descriptions and associations being a later addition. The associations with the classical elements are not grounded in traditional Indigenous teachings and the symbol has not been adopted by all Indigenous American nations.
Modern history.
Chemical element.
The Aristotelian tradition and medieval alchemy eventually gave rise to modern chemistry, scientific theories and new taxonomies. By the time of Antoine Lavoisier, for example, a list of elements would no longer refer to classical elements. Some modern scientists see a parallel between the classical elements and the four states of matter: solid, liquid, gas and weakly ionized plasma.
Modern science recognizes classes of elementary particles which have no substructure (or rather, particles that are not made of other particles) and composite particles having substructure (particles made of other particles).
Western astrology.
Western astrology uses the four classical elements in connection with astrological charts and horoscopes. The twelve signs of the zodiac are divided into the four elements: Fire signs are Aries, Leo and Sagittarius, Earth signs are Taurus, Virgo and Capricorn, Air signs are Gemini, Libra and Aquarius, and Water signs are Cancer, Scorpio, and Pisces.
Criticism.
The Dutch historian of science Eduard Jan Dijksterhuis writes that the theory of the classical elements "was bound to exercise a really harmful influence. As is now clear, Aristotle, by adopting this theory as the basis of his interpretation of nature and by never losing faith in it, took a course which promised few opportunities and many dangers for science." Bertrand Russell says that Aristotle's thinking became imbued with almost biblical authority in later centuries. So much so that "Ever since the beginning of the seventeenth century, almost every serious intellectual advance has had to begin with an attack on some Aristotelian doctrine".
|
6314
|
57939
|
https://en.wikipedia.org/wiki?curid=6314
|
Fire (classical element)
|
Fire is one of the four classical elements along with earth, water and air in ancient Greek philosophy and science. Fire is considered to be both hot and dry and, according to Plato, is associated with the tetrahedron.
Greek and Roman tradition.
Fire is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with the qualities of energy, assertiveness, and passion. In one Greek myth, Prometheus stole "fire" from the gods to protect the otherwise helpless humans, but was punished for this charity.
Fire was one of many "archai" proposed by the pre-Socratics, most of whom sought to reduce the cosmos, or its creation, to a single substance. Heraclitus considered "fire" to be the most fundamental of all elements. He believed fire gave rise to the other three elements: "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." He had a reputation for obscure philosophical principles and for speaking in riddles. He described how fire gave rise to the other elements as the "upward-downward path", a "hidden harmony" or series of transformations he called the "turnings of fire": first into "sea", and half that "sea" into "earth", and half that "earth" into rarefied "air". This is a concept that anticipates both the four classical elements of Empedocles and Aristotle's transmutation of the four elements into one another.
This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out.
Heraclitus regarded the soul as being a mixture of fire and water, with fire being the more noble part and water the ignoble aspect. He believed the goal of the soul is to be rid of water and become pure fire: the dry soul is the best and it is worldly pleasures that make the soul "moist". He was known as the "weeping philosopher" and died of hydropsy, a swelling due to abnormal accumulation of fluid beneath the skin.
However, Empedocles of Akragas is best known for having selected all four elements as his "archai", and by the time of Plato the four Empedoclean elements were well established. In the "Timaeus", Plato's major cosmological dialogue, the Platonic solid he associated with fire was the tetrahedron, which is formed from four triangles and encloses the least volume for the greatest surface area. This also makes fire the element with the smallest number of sides, and Plato regarded it as appropriate for the heat of fire, which he felt is sharp and stabbing (like one of the points of a tetrahedron).
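Plato's geometric intuition here can be checked against standard mensuration formulas; the following is a modern worked example, not part of the ancient sources. For a regular tetrahedron of edge length $a$,
\[
A = \sqrt{3}\,a^{2}, \qquad V = \frac{\sqrt{2}}{12}\,a^{3}, \qquad \frac{A}{V} = \frac{6\sqrt{6}}{a} \approx \frac{14.7}{a},
\]
while a cube of the same edge gives $A/V = 6/a$ and a regular icosahedron only $A/V \approx 3.97/a$. Among the Platonic solids of equal edge, the tetrahedron therefore presents the most surface for the least enclosed volume, matching the claim above.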
Plato's student Aristotle did not maintain his former teacher's geometric view of the elements, but rather preferred a somewhat more naturalistic explanation for the elements based on their traditional qualities. Fire, the hot and dry element, like the other elements, was an abstract principle and not identical with the normal solids, liquids and combustion phenomena we experience:
What we commonly call fire. It is not really fire, for fire is an excess of heat and a sort of ebullition; but in reality, of what we call air, the part surrounding the earth is moist and warm, because it contains both vapour and a dry exhalation from the earth.
According to Aristotle, the four elements rise or fall toward their natural place in concentric layers surrounding the center of the Earth and form the terrestrial or sublunary spheres.
In ancient Greek medicine, each of the four humours became associated with an element. Yellow bile was the humor identified with fire, since both were hot and dry. Other things associated with fire and yellow bile in ancient and medieval medicine included the season of summer, since it increased the qualities of heat and aridity; the choleric temperament (of a person dominated by the yellow bile humour); the masculine; and the eastern point of the compass.
In alchemy the chemical element of sulfur was often associated with fire, and its alchemical symbol was an upward-pointing triangle. In alchemic tradition, metals are incubated by fire in the womb of the Earth and alchemists only accelerate their development.
Indian tradition.
Agni is a Hindu and Vedic deity. The word "agni" is Sanskrit for fire (noun), cognate with Latin "ignis" (the root of English "ignite"), Russian "огонь" (fire), pronounced "agon". Agni has three forms: fire, lightning and the sun.
Agni is one of the most important of the Vedic gods. He is the god of fire and the accepter of sacrifices. The sacrifices made to Agni go to the deities because Agni is a messenger from and to the other gods. He is ever-young, because the fire is re-lit every day, yet he is also immortal. In Indian tradition fire is also linked to Surya or the Sun and Mangala or Mars, and with the south-east direction.
Teukāya ekendriya is a name used in Jain tradition which refers to Jīvas said to be reincarnated as fire.
Ceremonial magic.
Fire and the other Greek classical elements were incorporated into the Golden Dawn system. Philosophus (4=7) is the elemental grade attributed to fire; this grade is also attributed to the Qabalistic Sephirah Netzach and the planet Venus. The elemental weapon of fire is the Wand. Each of the elements has several associated spiritual beings. The archangel of fire is Michael, the angel is Aral, the ruler is Seraph, the king is Djin, and the fire elementals (following Paracelsus) are called salamanders. Fire is considered to be active; it is represented by the symbol for Leo and it is referred to the lower right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
Tarot.
Fire in tarot symbolizes conversion or passion. Many references to fire in tarot are related to the usage of fire in the practice of alchemy, in which the application of fire is a prime method of conversion, and everything that touches fire is changed, often beyond recognition. The symbol of fire was a cue pointing towards transformation, the chemical variant being the symbol delta, which is also the classical symbol for fire. Conversion symbolized can be good, for example, refining raw crudities to gold, as seen in The Devil. Conversion can also be bad, as in The Tower, symbolizing a downfall due to anger. Fire is associated with the suit of rods/wands, and as such, represents passion from inspiration. As an element, fire has mixed symbolism because it represents energy, which can be helpful when controlled, but volatile if left unchecked.
Modern witchcraft.
Fire is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.
Freemasonry.
In Freemasonry, fire is present, for example, during the ceremony of winter solstice, a symbol also of rebirth and energy. Freemasonry takes the ancient symbolic meaning of fire and recognizes its double nature: creation and light, on the one hand, and destruction and purification, on the other.
|
6315
|
1284082444
|
https://en.wikipedia.org/wiki?curid=6315
|
Air (classical element)
|
Air or Wind is one of the four classical elements along with water, earth and fire in ancient Greek philosophy and in Western alchemy.
Greek and Roman tradition.
According to Plato, it is associated with the octahedron; air is considered to be both hot and wet. The ancient Greeks used two words for air: "aer" meant the dim lower atmosphere, and "aether" meant the bright upper atmosphere above the clouds. Plato, for instance, writes that "So it is with air: there is the brightest variety which we call "aether", the muddiest which we call mist and darkness, and other kinds for which we have no name..." Among the early Greek Pre-Socratic philosophers, Anaximenes (mid-6th century BCE) named air as the "arche". A similar belief was attributed by some ancient sources to Diogenes Apolloniates (late 5th century BCE), who also linked air with intelligence and soul ("psyche"), but other sources claim that his "arche" was a substance between air and fire. Aristophanes parodied such teachings in his play "The Clouds" by putting a prayer to air in the mouth of Socrates.
Air was one of many "archai" proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BCE) selected four "archai" for his four roots: air, fire, water, and earth. Ancient and modern opinions differ as to whether he identified air by the divine name Hera, Aidoneus or even Zeus. Empedocles' roots became the four classical elements of Greek philosophy. Plato (427–347 BCE) took over the four elements of Empedocles. In the "Timaeus", his major cosmological dialogue, the Platonic solid associated with air is the octahedron, which is formed from eight equilateral triangles. This places air between fire and water, which Plato regarded as appropriate because it is intermediate in its mobility, sharpness, and ability to penetrate. He also said of air that its minuscule components are so smooth that one can barely feel them.
Plato's student Aristotle (384–322 BCE) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the universe to form the sublunary sphere. According to Aristotle, air is both hot and wet and occupies a place between fire and water among the elemental spheres. Aristotle definitively separated air from aether. For him, aether was an unchanging, almost divine substance that was found only in the heavens, where it formed celestial spheres.
Humorism and temperaments.
In ancient Greek medicine, each of the four humours became associated with an element. Blood was the humor identified with air, since both were hot and wet. Other things associated with air and blood in ancient and medieval medicine included the season of spring, since it increased the qualities of heat and moisture; the sanguine temperament (of a person dominated by the blood humour); hermaphrodite (combining the masculine quality of heat with the feminine quality of moisture); and the northern point of the compass.
Alchemy.
The alchemical symbol for air is an upward-pointing triangle, bisected by a horizontal line.
Modern reception.
The Hermetic Order of the Golden Dawn, founded in 1888, incorporates air and the other Greek classical elements into its teachings. The elemental weapon of air is the dagger, which must be painted yellow with magical names and sigils written upon it in violet. Each of the elements has several associated spiritual beings. The archangel of air is Raphael, the angel is Chassan, the ruler is Ariel, the king is Paralda, and the air elementals (following Paracelsus) are called sylphs. Air is considered to be active; it is referred to the upper left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
In the Golden Dawn and many other magical systems, each element is associated with one of the cardinal points and is placed under the care of guardian Watchtowers. The Watchtowers derive from the Enochian system of magic founded by John Dee. In the Golden Dawn, they are represented by the Enochian elemental tablets. Air is associated with the east, which is guarded by the First Watchtower.
Air is one of the five elements that appear in most Wiccan and Pagan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism.
Parallels in non-Western traditions.
Air is not one of the traditional five Chinese classical elements. Nevertheless, the ancient Chinese concept of "Qi" or "chi" is believed to be close to that of air. "Qi" is believed to be part of every living thing that exists, as a kind of "life force" or "spiritual energy". It is frequently translated as "energy flow", or literally as "air" or "breath". (For example, "tiānqì", literally "sky breath", is the Chinese word for "weather"). The concept of qi is often reified; however, no scientific evidence supports its existence.
The element air also appears as a concept in the Buddhist philosophy which has an ancient history in China.
Some modern Western occultists equate the Chinese classical element of metal with "air", while others equate it with wood, due to the elemental association of wind and wood in the bagua.
Enlil was the god of air in ancient Sumer. Shu was the ancient Egyptian deity of air and the husband of Tefnut, goddess of moisture. He became an emblem of strength by virtue of his role in separating Nut from Geb. Shu played a primary role in the Coffin Texts, which were spells intended to help the deceased reach the realm of the afterlife safely. On the way to the sky, the spirit had to travel through the air as one spell indicates: "I have gone up in Shu, I have climbed on the sunbeams."
According to Jain beliefs, the element air is inhabited by one-sensed beings or spirits called vāyukāya ekendriya, sometimes said to inhabit various kinds of winds such as whirlwinds, cyclones, monsoons, west winds and trade winds. Prior to reincarnating into another lifeform, spirits can remain as vāyukāya ekendriya for anywhere from a single instant up to three thousand years, depending on the karma of the spirits.
|
6316
|
1299986411
|
https://en.wikipedia.org/wiki?curid=6316
|
Water (classical element)
|
Water is one of the classical elements in ancient Greek philosophy along with air, earth and fire, in the Asian Indian system "Panchamahabhuta", and in the Chinese cosmological and physiological system "Wu Xing". In contemporary esoteric traditions, it is commonly associated with the qualities of emotion and intuition.
Greek and Roman tradition.
Water was one of many "archai" proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BC) selected four archai for his four roots: air, fire, water and earth. Empedocles' roots became the four classical elements of Greek philosophy. Plato (427–347 BC) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with water is the icosahedron, which is formed from twenty equilateral triangles. This makes water the element with the greatest number of sides, which Plato regarded as appropriate because water flows out of one's hand when picked up, as if it were made of tiny little balls.
Plato's student Aristotle (384–322 BC) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the Universe to form the sublunary sphere. According to Aristotle, water is both cold and wet and occupies a place between air and earth among the elemental spheres.
In ancient Greek medicine, each of the four humours became associated with an element. Phlegm was the humor identified with water, since both were cold and wet. Other things associated with water and phlegm in ancient and medieval medicine included the season of winter, since it increased the qualities of cold and moisture; the phlegmatic temperament; the feminine; and the western point of the compass.
In alchemy, the chemical element of mercury was often associated with water and its alchemical symbol was a downward-pointing triangle.
Indian tradition.
Ap ("áp-") is the Vedic Sanskrit term for water, which in Classical Sanskrit occurs only in the plural, "āpas" (sometimes re-analysed as a thematic singular, "āpa-"), whence Hindi "āp". The term is from the Proto-Indo-European root "*h₂ep-", "water".
In Hindu philosophy, the term refers to water as an element, one of the "Panchamahabhuta", or "five great elements". In Hinduism, it is also the name of the deva, a personification of water (one of the Vasus in most later Puranic lists). The element water is also associated with Chandra or the moon and Shukra, who represent feelings, intuition and imagination.
According to Jain tradition, water itself is inhabited by spiritual Jīvas called apakāya ekendriya.
Ceremonial magic.
Water and the other Greek classical elements were incorporated into the Golden Dawn system. The elemental weapon of water is the cup. Each of the elements has several associated spiritual beings. The archangel of water is Gabriel, the angel is Taliahad, the ruler is Tharsis, the king is Nichsa and the water elementals are called undines. It is referred to the upper right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
Modern witchcraft.
Water is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.
|
6317
|
10202399
|
https://en.wikipedia.org/wiki?curid=6317
|
Earth (classical element)
|
Earth is one of the classical elements, in some systems being one of the four along with air, fire, and water.
European tradition.
Earth is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with qualities of heaviness, matter and the terrestrial world. Due to hero cults and chthonic underworld deities, the element of "earth" is also associated with the sensual aspects of both life and death in later occultism.
Empedocles of Acragas proposed four "archai" by which to understand the cosmos: "fire", "air", "water", and "earth". Plato (427–347 BCE) believed the elements were geometric forms (the platonic solids) and he assigned the cube to the element of "earth" in his dialogue "Timaeus". Aristotle (384–322 BCE) believed "earth" was the heaviest element, and his theory of "natural place" suggested that any "earth-laden" substances would fall quickly, straight down, towards the center of the "cosmos".
In Classical Greek and Roman myth, various goddesses represented the Earth, seasons, crops and fertility, including Demeter and Persephone; Ceres; the Horae (goddesses of the seasons); and Proserpina; as well as Hades (Pluto), who ruled the souls of the dead in the Underworld.
In ancient Greek medicine, each of the four humours became associated with an element. Black bile was the humor identified with earth, since both were cold and dry. Other things associated with earth and black bile in ancient and medieval medicine included the season of fall, since it increased the qualities of cold and aridity; the melancholic temperament (of a person dominated by the black bile humour); the feminine; and the southern point of the compass.
In alchemy, earth was believed to be primarily dry and secondarily cold (as per Aristotle). Beyond those classical attributes, the chemical substance salt was associated with earth, and its alchemical symbol was a downward-pointing triangle bisected by a horizontal line.
Indian tradition.
Prithvi (Sanskrit "pṛthvī", also "pṛthivī") is the Hindu "earth" and mother goddess. According to one such tradition, she is the personification of the Earth itself; according to another, its actual mother, being "Prithvi Tattwa", the essence of the element earth.
As "Prithvi Mata", or "Mother Earth", she contrasts with "Dyaus Pita", "father sky". In the Rigveda, "earth" and sky are frequently addressed as a duality, often indicated by the idea of two complementary "half-shells." In addition, the element Earth is associated with Budha or Mercury who represents communication, business, mathematics and other practical matters.
Jainism mentions one-sensed beings or spirits believed to inhabit the element earth, sometimes classified as pṛthvīkāya ekendriya.
Ceremonial magic.
Earth and the other Greek classical elements were incorporated into the Golden Dawn system. Zelator is the elemental grade attributed to earth; this grade is also attributed to the Sephirah Malkuth. The elemental weapon of earth is the Pentacle. Each of the elements has several associated spiritual beings. The archangel of earth is Uriel, the angel is Phorlakh, the ruler is Kerub, the king is Ghob, and the earth elementals (following Paracelsus) are called gnomes. Earth is considered to be passive; it is represented by the symbol for Taurus, and it is referred to the lower left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
It is sometimes represented by its Tattva or by a downward-pointing triangle with a horizontal line through it.
Modern witchcraft.
Earth is one of the five elements that appear in most Wiccan and Pagan traditions. Wicca in particular was influenced by the Golden Dawn system of magic, and Aleister Crowley's mysticism which was in turn inspired by the Golden Dawn.
Other traditions.
"Earth" is represented in the Aztec religion by a house; to the Hindus, a lotus; to the Scythians, a plough; to the Greeks, a wheel; and in Christian iconography; bulls and birds.
|
6319
|
1295075931
|
https://en.wikipedia.org/wiki?curid=6319
|
Blue Jam
|
Blue Jam was an ambient, surreal dark comedy and horror radio programme created and directed by Chris Morris. It was broadcast on BBC Radio 1 in the early hours of the morning for three series from 1997 to 1999.
The programme gained cult status due to its unique mix of surreal monologue, ambient soundtrack, synthesised voices, heavily edited broadcasts and recurring sketches. It featured vocal performances by Kevin Eldon, Julia Davis, Mark Heap, David Cann and Amelia Bullmore, with Morris himself delivering disturbing monologues, one of which was revamped and made into the BAFTA-winning short film "My Wrongs #8245–8249 & 117". Writers who contributed to the programme included Graham Linehan, Arthur Mathews, Peter Baynham, David Quantick, Jane Bussmann, Robert Katz and the cast.
The programme was adapted into the TV series "Jam", which aired in 2000.
Production.
On his inspiration for making the show, Morris commented: "It was so singular, and it came from a mood, quite a desolate mood. I had this misty, autumnal, boggy mood anyway, so I just went with that. But no doubt getting to the end of something like "Brass Eye", where you've been forced to be a sort of surrogate lawyer, well, that's the most creatively stifling thing you could possibly do." Morris also described the show as being "like the nightmares you have when you fall asleep listening to the BBC World Service" (a reference to the World Service also appears in one of the monologues read by Morris).
Morris originally requested that the show be broadcast at 3 a.m. on Radio 1 "because at that hour, on insomniac radio, the amplitude of terrible things is enormously overblown". As a compromise, the show was broadcast at midnight without much promotion. Morris reportedly included sketches too graphic or transgressive for radio that he knew would be cut, so as to make his other material seem less transgressive in comparison. During the airing of episode 6 of series one, a re-editing of the Archbishop of Canterbury's speech at Princess Diana's funeral was deemed too offensive for broadcast, and the episode was replaced with a different one as it aired.
Format and style.
Each episode opened (and closed) with a short spoken monologue (delivered by Morris) describing, in surreal, broken language, various bizarre feelings and situations (for example: "when you sick so sad you cry, and in crying cry a whole leopard from your eye"), set to ambient music interspersed with short clips of other songs and sounds. The introduction would always end with "welcome in Blue Jam", inviting the listener, who is presumably experiencing such feelings, to get lost in the programme. (This format was replicated in the television adaptation "Jam", often reusing opening monologues from series 3 of the radio series.) The sketches within dealt with heavy and taboo topics, such as murder, suicide, missing or dead children, and rape.
Common recurring sketches.
The sketches are often in the style of a documentary; characters speak as if being interviewed about a recent event. In one sketch, a character voiced by Morris describes a man attempting to commit suicide by jumping off a second-story balcony repeatedly; in another, an angry man (Eldon) shouts about how his car, after being picked up from the garage, is only four feet long.
Radio stings.
Morris included a series of 'radio stings', bizarre sequences of sounds and prose that parody modern DJs' own soundbites and self-advertising pieces. Each one revolves around a contemporary DJ, such as Chris Moyles, Jo Whiley or Mark Goodier, and typically involves the DJ dying in a graphic way or going mad in some form; for example, Chris Moyles covering himself in jam and hanging himself from the top of a building.
Episodes.
Three series were produced, with a total of eighteen episodes. All episodes were originally broadcast weekly on BBC Radio 1. Series 1 was broadcast from 14 November to 19 December 1997; series 2 was broadcast from 27 March to 1 May 1998; and series 3 was broadcast from 21 January to 25 February 1999.
The first five episodes of series 1 of "Blue Jam" were repeated by BBC Radio 4 Extra in February and March 2014, and series 2 was rebroadcast in December.
Music.
"Blue Jam" features songs, generally of a downtempo nature, interspersed between (and sometimes during) sketches. Artists featured includes Massive Attack, Air, Morcheeba, The Chemical Brothers, Björk, Aphex Twin, Everything But the Girl and Dimitri from Paris, as well as various non-electronic artists including Sly and the Family Stone, Serge Gainsbourg, The Cardigans and Eels.
Reception.
"Blue Jam" was favourably reviewed on several occasions by "The Guardian" and also received a positive review by "The Independent".
Digital Spy wrote in 2014: "It's a heady cocktail that provokes an odd, unsettling reaction in the listener, yet "Blue Jam" is still thumpingly and frequently laugh-out-loud hilarious." "Hot Press" called it "as odd as comedy gets".
CD release.
A CD of a number of "Blue Jam" sketches was released on 23 October 2000 by record label Warp. Although the CD claims to have 22 tracks, the last one, "www.bishopslips.com", is not a track, but rather a reference to the "Bishopslips" sketch, which was cut in the middle of a broadcast. Most of the sketches on the CD were remade for "Jam".
Related shows.
"Blue Jam" was later made for television and broadcast on Channel 4 as "Jam". It used unusual editing techniques to achieve an unnerving ambience in keeping with the radio show. Many of the sketches were lifted from the radio version, even to the extent of simply setting images to the radio soundtrack. A subsequent "re-mixed" airing, called "Jaaaaam" was even more extreme in its use of post-production gadgetry, often heavily distorting the footage.
|
6321
|
48271590
|
https://en.wikipedia.org/wiki?curid=6321
|
Channel 4
|
Channel 4 is a British free-to-air public broadcast television channel owned and operated by Channel Four Television Corporation. It is publicly owned but, unlike the BBC, it receives no public funding and is funded entirely by its commercial activities, including advertising. It began transmission in 1982 and was established to provide a fourth television service in the United Kingdom. At the time, the only other channels were the licence-funded BBC1 and BBC2, and a single commercial broadcasting network, ITV.
Originally a subsidiary of the Independent Broadcasting Authority (IBA), the station is now owned and operated by Channel Four Television Corporation, a public corporation of the Department for Culture, Media and Sport, which was established in 1990 and came into operation in 1993. Until 2010, Channel 4 did not broadcast in Wales, but many of its programmes were re-broadcast there by the Welsh fourth channel S4C. In 2010, Channel 4 extended service into Wales and became a nationwide television channel. The network's headquarters are in London and Leeds, with creative hubs in Manchester, Glasgow and Bristol.
History.
Conception.
Before Channel 4 and S4C, Britain had three terrestrial television services: BBC1, BBC2, and ITV, with BBC2 the last to launch in 1964. The Broadcasting Act 1980 began the process of adding a fourth channel; Channel Four Television Company was formally created in 1981, along with its Welsh counterpart.
The notion of a second commercial broadcaster in the United Kingdom had been around since the inception of ITV in 1954 and its subsequent launch in 1955; the idea of an "ITV2" was long expected and pushed for. Indeed, television sets sold throughout the 1970s and early 1980s often had a spare tuning button labelled "ITV 2" or "IBA 2". Throughout ITV's history and until Channel 4 finally became a reality, a perennial dialogue existed between the GPO, the government, the ITV companies and other interested parties, concerning the form such an expansion of commercial broadcasting would take. Most likely, politics had the biggest impact, leading to a delay of almost three decades before the second commercial channel became a reality.
One benefit of the late arrival of the channel was that its frequency allocations at each transmitter had already been arranged in the early 1960s when the launch of an "ITV2" was anticipated. This led to good coverage across most of the country and few problems of interference with other UK-based transmissions; a stark contrast to the difficulties associated with Channel 5's launch almost 15 years later.
Wales.
At the time the fourth service was being considered, a movement in Wales lobbied for the creation of a dedicated service that would air Welsh language programmes, then catered for only at off-peak times on BBC Wales and HTV. The campaign was taken so seriously by Gwynfor Evans, former president of Plaid Cymru, that he threatened the government with a hunger strike were it not to honour the plans.
The result was that Channel 4 as seen by the rest of the United Kingdom would be replaced in Wales by S4C (Sianel Pedwar Cymru, meaning "Channel Four Wales" in Welsh). Operated by a specially created authority, S4C would air programmes in Welsh made by HTV, the BBC and independent companies. Initially, limited frequency space meant that Channel 4 could not be broadcast alongside S4C, though some Channel 4 programmes would be aired at less popular times on the Welsh variant; this practice continued until the closure of S4C's analogue transmissions in 2010, at which time S4C became a fully Welsh channel. With this conversion of the Wenvoe transmitter group in Wales to digital terrestrial broadcasting on 31 March 2010, Channel 4 became a nationwide television channel for the first time.
Since then, carriage on digital cable, satellite and digital terrestrial has introduced Channel 4 to Welsh homes where it is now universally available.
1982–1992: Launch and IBA control.
After some months of test broadcasts, the new broadcaster began scheduled transmissions on 2 November 1982 from Scala House, the former site of the Scala Theatre. Its initial broadcasts reached 87% of the United Kingdom.
The first voice heard on Channel 4's opening day of 2 November 1982 was that of continuity announcer Paul Coia who said: "Good afternoon. It's a pleasure to be able to say to you, welcome to Channel 4." Following the announcement, the channel played a montage of clips from its programmes set to the station's signature tune, "Fourscore", written by David Dundas, which would form the basis of the station's jingles for its first decade. The first programme to air on the channel was the teatime game show "Countdown", produced by Yorkshire Television, at 16:45. The first person to be seen on Channel 4 was Richard Whiteley, with Ted Moult being the second. Whiteley hosted the gameshow for 23 years until his death in 2005. The first woman on the channel, contrary to popular belief, was not Whiteley's "Countdown" co-host Carol Vorderman, but a lexicographer only ever identified as Mary. Whiteley opened the show with the words: "As the countdown to a brand new channel ends, a brand new countdown begins." On its first day, Channel 4 also broadcast the soap opera "Brookside", which often ran storylines thought to be controversial; this ran until 2003.
After three days, ITV chiefs called for founding chief executive Jeremy Isaacs to resign due to poor ratings. Critics called it "Channel Bore" and "Channel Snore".
At its launch, Channel 4 committed itself to providing an alternative to the existing channels, an agenda in part set out by its remit, which required the provision of programming to minority groups. In step with its remit, the channel became well received by both minority groups and the arts and cultural worlds during this period under Isaacs, and it gained a reputation for programmes on the contemporary arts. Two programmes captured awards from the Broadcasting Press Guild in March 1983: best comedy for "The Comic Strip Presents…Five Go Mad in Dorset," and best on-screen performance in a non-acting role for Tom Keating in his series "On Painters". Channel 4 co-commissioned Robert Ashley's television opera "Perfect Lives", which it premiered over several episodes in 1984. The channel often did not receive mass audiences for much of this period, as might be expected for a station focusing on minority interests. During this time, Channel 4 also began the funding of independent films, such as the Merchant Ivory docudrama "The Courtesans of Bombay".
In 1987, Richard Attenborough replaced Edmund Dell as chairman. In 1988, Michael Grade became CEO.
In 1992, Channel 4 faced its first libel case which was brought by Jani Allan, a South African journalist, who objected to her representation in Nick Broomfield's documentary "The Leader, His Driver and the Driver's Wife".
1993–2006: Channel Four Television Corporation.
After control of the station passed from the Channel Four Television Company to the Channel Four Television Corporation in 1993, a shift in broadcasting style took place. Instead of aiming for minority tastes, it began to focus on the edges of the mainstream, and the centre of the mass market itself. It began to show many American programmes in peak viewing time, far more than it had previously done.
In September 1993, the channel broadcast the direct-to-TV documentary film "Beyond Citizen Kane", in which it displayed the dominant position of the Rede Globo (now TV Globo) television network, and discussed its influence, power, and political connections in Brazil.
Throughout the 1990s and 2000s, Channel 4 gave many popular and influential American comedy and drama series their first exposure on British television, such as "Friends", "Cheers", "Will & Grace", "NYPD Blue", "ER", "Desperate Housewives", "Without a Trace", "Home Improvement", "Frasier", "Lost", "Nip/Tuck", "Third Watch", "The West Wing", "Ally McBeal", "Freaks and Geeks", "Roseanne", "Dawson's Creek", "Oz", "Sex and the City", "The Sopranos", "Scrubs", "King of the Hill", "Babylon 5", "Stargate SG-1", "Andromeda", "Family Guy", "South Park" and "Futurama".
In the early 2000s, Channel 4 began broadcasting reality formats such as "Big Brother" and obtained the rights to broadcast mass appeal sporting events like cricket and horse racing. This new direction increased ratings and revenues. The popularity of "Big Brother" led to the launches of other, shorter-lived new reality shows to chase the populist audience, such as "The Salon", "Shattered" and "Space Cadets".
In addition, the corporation launched several new television channels through its new 4Ventures offshoot, including Film4, At the Races, E4 and More4.
Partially in reaction to its new "populist" direction, the Communications Act 2003 directed the channel to demonstrate innovation, experimentation, and creativity, appeal to the tastes and interests of a culturally diverse society, and include programmes of an educational nature which exhibit a distinctive character.
On 31 December 2004, Channel 4 launched a new visual identity in which the logo is disguised as different objects and the "4" can be seen from an angle.
Under the leadership of Freeview founder Andy Duncan, 2005 saw a change of direction for Channel 4's digital channels. The company made E4 free-to-air on digital terrestrial television, and launched a new free-to-air digital channel called More4. By October, Channel 4 had joined the Freeview consortium. By July 2006, Film4 had likewise become free-to-air and restarted broadcasting on digital terrestrial.
Venturing into radio broadcasting, in 2005 Channel 4 purchased 51% of the shares in the now defunct Oneword radio station, with UBC Media holding the remainder. Programmes such as the weekly half-hour news show "The Morning Report" were among the content Channel 4 provided for the station, under the name 4Radio. As of early 2009, however, Channel 4's future involvement in radio remained uncertain.
Since 2006.
Before the digital switchover, Channel 4 raised concerns over how it might finance its public service obligations afterward. In April 2006, it was announced that Channel 4's digital switch-over costs would be paid for by licence fee revenues.
In July 2007, Channel 4 paid £28 million for a 50% stake in the TV business of British media company EMAP, which had seven music video channels. On 15 August 2008, 4Music was launched across the UK. Channel 4 announced interest in launching a high-definition version of Film4 on Freeview, to coincide with the launch of Channel 4 HD, but the fourth HD slot was given to Channel 5 instead.
On 2 November 2007, the station celebrated its 25th birthday. It showed the first episode of "Countdown", an anniversary "Countdown" special, and a special edition of "The Big Fat Quiz". Throughout the day, its presentation used the original multicoloured 1982–1996 blocks logo and idents featuring the Fourscore jingle.
In November 2009, Channel 4 launched a week of 3D television, broadcasting selected programmes each night using stereoscopic ColorCode 3D technology. The accompanying 3D glasses were distributed through Sainsbury's supermarkets.
On 29 September 2015, Channel 4 revamped its presentation for a fifth time; the new branding downplayed the "4" logo from most on-air usage, in favour of using the shapes from the logo in various forms. Four new idents were filmed by Jonathan Glazer, which featured the shapes in various real-world scenes depicting the "discovery" and "origins" of the shapes. The full logo was still occasionally used, but primarily for off-air marketing. Channel 4 also commissioned two new corporate typefaces, "Chadwick", and "Horseferry" (a variation of Chadwick with the aforementioned shapes incorporated into its letter forms), for use across promotional material and on-air.
In June 2017, it was announced that Alex Mahon would be the next chief executive, and would take over from David Abraham, who left in November 2017.
On 31 October 2017, Channel 4 introduced a new series of idents continuing the theme, this time depicting the logo shapes as having formed into an anthropomorphic "giant" character.
On 25 September 2021, Channel 4 and several of its sub-channels went off air after an incident at Red Bee Media's playout centre in west London. Channel 4, More4, Film4, E4, 4Music, The Box, Box Hits, Kiss, Magic and Kerrang! stopped transmitting, but 4seven was not impacted. A number of the channels were still affected on 30 September. The London Fire Brigade confirmed that a gas fire prevention system at the site had been activated, but firefighters found no sign of fire. Activation of the fire suppression system caused catastrophic damage to some systems, such as Channel 4's subtitles, signing and audio description system. An emergency backup subtitling system also failed, leaving Channel 4 unable to provide access services to viewers. This situation was criticised by the National Deaf Children's Society, which complained to the broadcasting watchdog. A new subtitling, signing and audio description system had to be built from scratch, and the service eventually began to return at the end of October. In June 2022, after a six-month investigation, Ofcom found that Channel 4 had breached its broadcast licence conditions on two grounds: missing its subtitles quota on Freesat for 2021, and failing to communicate effectively with affected audiences.
On 23 December 2021, Jon Snow presented "Channel 4 News" for the last time, after 32 years as a main presenter on the programme, making Snow one of the UK's longest-serving presenters on a national news programme.
In April 2025, it was announced that Alex Mahon would step down as chief executive (CEO) of Channel 4 in the summer of that year. She was succeeded on an interim basis by Jonathan Allan, the broadcaster's chief operating officer, while a search for a permanent replacement was launched.
Abandoned privatisation.
Channel 4's parent company, Channel Four Television Corporation, was considered for privatisation by the governments of Margaret Thatcher, John Major and Tony Blair. In 2014, the Cameron-Clegg coalition government drew up proposals to privatise the corporation but the sale was blocked by the Liberal Democrat Business Secretary Vince Cable. In 2016, the future of the channel was again being looked into by the government, with analysts suggesting several options for its future. In June 2021, the government of Boris Johnson was considering selling the channel.
In April 2022, the Department for Culture, Media and Sport acknowledged that ministerial discussions were taking place regarding the sale of Channel Four Television Corporation. The channel's chief executive, Alex Mahon, expressed disappointment at this, saying that its vision for the future was "rooted in continued public ownership".
In January 2023, Michelle Donelan confirmed that the plans to sell Channel 4 were scrapped and that it would remain in public ownership for the foreseeable future.
Public service remit.
Channel 4 was established with, and continues to hold, a remit of public service obligations which it must fulfil. The remit changes periodically, as dictated by various broadcasting and communications acts, and is regulated by the various authorities Channel 4 has been answerable to; originally the IBA, then the ITC and now Ofcom.
In addition to the requirements of the Communications Act 2003 set out above, the remit involves an obligation to provide programming for schools, and a substantial amount of programming produced outside of Greater London.
Carriage.
Channel 4 was carried from its beginning on analogue terrestrial, the standard means of television broadcast in the United Kingdom, and continued to be broadcast by these means until the changeover to digital terrestrial television in the United Kingdom was complete. Since 1998, it has been universally available on digital terrestrial and on the Sky satellite platform (initially encrypted, though encryption was dropped on 14 April 2008; the channel is now free of charge and available on the Freesat platform), as well as having been available, at various times and in various areas, on analogue and digital cable networks.
Due to its special status as a public service broadcaster with a specific remit, it is afforded free carriage on the terrestrial platforms, in contrast with other broadcasters such as ITV.
Channel 4 is available outside the United Kingdom; it is widely available in the Republic of Ireland, the Netherlands, Belgium and Switzerland. The channel is registered to broadcast within the European Union/EEA through the Luxembourg Broadcasting Regulator (ALIA).
Since 2019, it has been offered by British Forces Broadcasting Service (BFBS) to members of the British Armed Forces and their families around the world, BFBS Extra having previously carried a selection of Channel 4 programmes.
The Channel 4 website allows people in the United Kingdom to watch Channel 4 live; previously, some programmes (mostly international imports) could not be shown via the stream. The channel was previously carried by Zattoo, until the operator removed it from its platform.
Channel 4 also makes some of its programming available "on demand" via cable and the internet through the Channel 4 VoD service.
Funding.
During its first decade, Channel 4 was funded by subscriptions collected by the IBA from the ITV regional companies, in return for which each company had the right to sell advertisements on the fourth channel in its own region and keep the proceeds. This meant that ITV and Channel 4 were not in competition with each other, and often promoted each other's programmes.
A change in funding came about under the Broadcasting Act 1990 when the new corporation was afforded the ability to fund itself. Originally this arrangement left a "safety net" guaranteed minimum income should the revenue fall too low, funded by large insurance payments made to the ITV companies. Such a subsidy was never required, however, and these premiums were phased out by the government in 1998. After the link with ITV was cut, the cross-promotion which had existed between ITV and Channel 4 also ended.
In 2007, owing to severe funding difficulties, the channel sought government help and was granted a payment of £14 million over a six-year period. The money was to have come from the television licence fee, and would have been the first time that money from the licence fee had been given to any broadcaster other than the BBC. However, the plan was scrapped by the Secretary of State for Culture, Media and Sport, Andy Burnham, ahead of "broader decisions about the future framework of public service broadcasting". The broadcasting regulator Ofcom released its review in January 2009 in which it suggested that Channel 4 would preferably be funded by "partnerships, joint ventures or mergers".
The channel now breaks even in much the same way as most privately run commercial stations: through the sale of on-air advertising, programme sponsorship, and the sale of any programme content and merchandising rights it owns, such as overseas broadcasting rights and domestic video sales. In one year, for example, its total revenues were £925 million, with 91% derived from the sale of advertising. It also has the ability to subsidise the main network through any profits made on the corporation's other endeavours, which have in the past included subscription fees from stations such as E4 and Film4 (now no longer subscription services) and its video-on-demand sales. In practice, however, these other activities are loss-making, and are subsidised by the main network. According to Channel 4's published accounts, for 2005 the extent of this cross-subsidy was some £30 million.
Programming.
Channel 4 is a "publisher-broadcaster", meaning that it commissions or "buys" all of its programming from companies independent of itself. It was the first UK broadcaster to do so on a significant scale; such commissioning is a stipulation which is included in its licence to broadcast. In consequence, numerous independent production companies emerged, though external commissioning on the BBC and in ITV (where a quota of 25% minimum of total output has been imposed since the Broadcasting Act 1990 came into force) has become regular practice, as well as on the numerous stations that launched later. Although it was the first British broadcaster to commission all of its programmes from third parties, Channel 4 was the last terrestrial broadcaster to outsource its transmission and playout operations (to Red Bee Media), after 25 years in-house.
Channel 4 also began a trend of owning the copyright and distribution rights of the programmes it aired, in a manner similar to the major Hollywood studios' ownership of television programmes that they did not directly produce. Thus, although Channel 4 does not produce programmes, many are seen as belonging to it.
It was established with a specific intention of providing programming to groups of minority interests, not catered for by its competitors, which at the time were only the BBC and ITV.
Channel 4 also pioneered the concept of 'stranded programming', where seasons of programmes following a common theme would be aired and promoted together. Some strands were very specific and ran for a fixed period of time; the "4 Mation" season, for example, showed innovative animation. Other, less specific strands were (and still are) run regularly, such as "T4", a strand of programming aimed at teenagers, on weekend mornings (and weekdays during school/college holidays); "Friday Night Comedy", a slot in which the channel would pioneer its style of comedy commissions; "4Music" (now a separate channel); and "4Later", an eclectic collection of offbeat programmes transmitted in the early hours of the morning.
For a period in the mid-1980s, some sexually explicit arthouse films would be screened with a "red triangle" graphic in the upper right of the screen.
In recent years, concerns have arisen regarding a number of programmes made for Channel 4 that are believed to be missing from all known archives.
Most watched programmes.
The following is a list of the 10 most watched shows on Channel 4 since launch, based on Live +28 data supplied by BARB, and archival data published by Channel 4.
Comedy.
During the station's early days, innovative short one-off comedy films produced by a rotating line-up of alternative comedians were screened under the title "The Comic Strip Presents". "The Optimist", one of the channel's earliest commissioned programmes, was the world's first dialogue-free television comedy. "The Tube" and "Saturday Live/Friday Night Live" also launched the careers of a number of comedians and writers. Channel 4 broadcast a number of popular American imports, including "Cheers", "The Cosby Show", "Roseanne", "Home Improvement", "Friends", "Sex and the City", "Everybody Loves Raymond", "South Park", "Family Guy", "Futurama", "Frasier", "Scrubs", and "Will & Grace". Another significant US acquisition was "The Simpsons", for which the station was reported to have paid £700,000 per episode for the terrestrial television rights in 2004; the show continues to air on the channel on weekends.
In April 2010, Channel 4 became the first UK broadcaster to adapt the American comedy institution of roasting to British television, with "A Comedy Roast".
In 2010, Channel 4 organised "Channel 4's Comedy Gala", a comedy benefit show in aid of Great Ormond Street Children's Hospital. With over 25 comedians appearing, the channel billed it as "the biggest live stand up show in United Kingdom history". Filmed live on 30 March in front of an audience of 14,000 at The O2 Arena in London, it was broadcast on 5 April. The event continued until 2016.
In 2021, Channel 4 decided to revive the British Comedy Awards as part of its Stand Up To Cancer programming. The ceremony, billed as the National Comedy Awards, was due to be held in spring 2021 but was delayed twice by the coronavirus pandemic and eventually held a year later.
Factual and current affairs.
Channel 4 has a strong reputation for history programmes and documentaries. Its news service "Channel 4 News" is supplied by ITN, whilst its long-standing investigative documentary series, "Dispatches", gains attention from other media outlets. Its live broadcast of the first public autopsy in the UK for 170 years, carried out by Gunther von Hagens in 2002, and the 2003 one-off stunt "Derren Brown Plays Russian Roulette Live" both proved controversial.
A season of television programmes about masturbation, called "Wank Week", was to be broadcast in the United Kingdom by Channel 4 in March 2007. The series came under public attack from senior television figures, and was pulled amid claims of declining editorial standards and concern for the channel's public service broadcasting credentials.
FourDocs.
FourDocs was an online documentary site provided by Channel 4. It allowed viewers to upload their own documentaries to the site for others to view. It focused on documentaries of between 3 and 5 minutes. The website also included an archive of classic documentaries, interviews with documentary filmmakers and short educational guides to documentary-making. It won a Peabody Award in 2006. The site also included a strand for documentaries of under 59 seconds, called "Microdocs".
Schools programming.
Channel 4 is obliged to carry schools programming as part of its remit and licence.
ITV Schools on Channel 4.
ITV had produced schools programming since 1957, and doing so later became a formal obligation. In 1987, five years after Channel 4 was launched, the IBA afforded ITV free carriage of these programmes during Channel 4's then-unused weekday morning hours. This arrangement allowed the ITV companies to fulfil their obligation to provide schools programming, whilst allowing ITV itself to broadcast regular programmes complete with advertisements. While schools programmes were aired, Central Television provided most of the continuity, with play-out originating from Birmingham.
Channel 4 Schools/4Learning.
After the restructuring of the station in 1993, ITV's obligations to provide such programming on Channel 4's airtime passed to Channel 4 itself, and the new service became Channel 4 Schools, with the new corporation administering the service and commissioning its programmes, some still from ITV, others from independent producers.
In March 2008, the 4Learning interactive new media commission Slabovia.tv was launched. The Slabplayer online media player showing TV shows for teenagers was launched on 26 May 2008.
The schools programming has always had presentational elements which differ from the channel's normal package. In 1993, the Channel 4 Schools idents featured famous people in a given field, lit against an industrial-looking setting and accompanied by calming instrumental music. This changed in 1996 with the 'circles' look, in which numerous children touched the screen, forming circles of information that were then picked up by other children; the last child would produce the Channel 4 logo in the form of three vertical circles, with a fourth circle to the middle-left containing the Channel 4 logo.
Religious programmes.
From the outset, Channel 4 did not conform to the expectations of conventional religious broadcasting in the UK. John Ranelagh, its first commissioning editor for religion, made his priority 'broadening the spectrum of religious programming' and more 'intellectual' concerns. He also ignored the religious programme advisory structure that had been put in place by the BBC and subsequently adopted by ITV. Ranelagh's first major commission, a three-part documentary series transmitted during the Easter period of 1984, caused a furore: the programmes seemed to advocate the idea that the Gospels were unreliable, that Jesus may have indulged in witchcraft, and that he may not even have existed. The series triggered a public outcry and marked a significant moment in the deterioration of the relationship between the UK's broadcasting and religious institutions.
Film.
Numerous genres of film-making – such as comedy, drama, documentary, adventure/action, romance and horror/thriller – are represented in the channel's schedule. From the launch of Channel 4 until 1998, film presentations on C4 would often be broadcast under the "Film on Four" banner.
In March 2005, Channel 4 screened the uncut Lars von Trier film "The Idiots", which includes unsimulated sexual intercourse, making it the first UK terrestrial channel to do so. The channel had previously screened other films with similar material but censored and with warnings.
Since 1 November 1998, Channel 4 has had a digital subsidiary channel dedicated to the screening of films. This channel launched as a paid subscription channel under the name "FilmFour", and was relaunched in July 2006 as a free-to-air channel under the current name of "Film4". The Film4 channel carries a wide range of film productions, including acquired and Film4-produced projects. Channel 4's general entertainment channels E4 and More4 also screen feature films at certain points in the schedule as part of their content mix.
Global warming.
On 8 March 2007, Channel 4 screened a documentary, "The Great Global Warming Swindle", stating that global warming is "a lie" and "the biggest scam of modern times". The programme's accuracy was disputed on multiple points, and commentators criticised it for being one-sided, observing that the mainstream position on global warming is supported by the scientific academies of the major industrialised nations. There were 246 complaints to Ofcom as of 25 April 2007, including allegations that the programme falsified data. The programme was criticised by scientists and scientific organisations, and various scientists who participated in the documentary claimed their views had been distorted.
"Against Nature": An earlier controversial Channel 4 programme made by Martin Durkin which was also critical of the environmental movement and was charged by the UK's Independent Television Commission for misrepresenting and distorting the views of interviewees by selective editing.
"The Greenhouse Conspiracy": An earlier Channel 4 documentary broadcast on 12 August 1990, as part of the "Equinox" series, in which similar claims were made. Three of the people interviewed (Lindzen, Michaels and Spencer) were also interviewed in "The Great Global Warming Swindle".
Ahmadinejad's Christmas speech.
In the "Alternative Christmas address" of 2008, a Channel 4 tradition since 1993 with a different presenter each year, Iranian President Mahmoud Ahmadinejad made a thinly veiled attack on the United States by claiming that Christ would have been against "bullying, ill-tempered and expansionist powers".
The broadcast was rebuked by human rights activists, politicians and religious figures, including Peter Tatchell, Louise Ellman, Ron Prosor and Rabbi Aaron Goldstein. A spokeswoman for the Foreign and Commonwealth Office said: "President Ahmadinejad has, during his time in office, made a series of appalling anti-Semitic statements. The British media are rightly free to make their own editorial choices, but this invitation will cause offence and bemusement not just at home but among friendly countries abroad".
However, Channel 4 was defended by Stonewall director Ben Summerskill who stated: "In spite of his ridiculous and often offensive views, it is an important way of reminding him that there are some countries where free speech is not repressed...If it serves that purpose, then Channel 4 will have done a significant public service". Dorothy Byrne, Channel 4's head of news and current affairs, said in response to the station's critics: "As the leader of one of the most powerful states in the Middle East, President Ahmadinejad's views are enormously influential... As we approach a critical time in international relations, we are offering our viewers an insight into an alternative world view...Channel 4 has devoted more airtime to examining Iran than any other broadcaster and this message continues a long tradition of offering a different perspective on the world around us".
4Talent.
4Talent is an editorial branch of Channel 4's commissioning wing, which co-ordinates Channel 4's various talent development schemes for film, television, radio, new media and other platforms and provides a showcasing platform for new talent.
There are bases in London, Birmingham, Glasgow and Belfast, serving editorial hubs known respectively as 4Talent National, 4Talent Central England, 4Talent Scotland and 4Talent Northern Ireland. These four sites include features, profiles and interviews in text, audio and video formats, divided into five zones: TV, Film, Radio, New Media and Extras, which covers other arts such as theatre, music and design. 4Talent also collates networking, showcasing and professional development opportunities, and runs workshops, masterclasses, seminars and showcasing events across the UK.
"4Talent Magazine".
"4Talent Magazine" is the creative industries magazine from 4Talent, which launched in 2005 as "TEN4" magazine under the editorship of Dan Jones. "4Talent Magazine" is currently edited by Nick Carson. Other staff include deputy editor Catherine Bray and production editor Helen Byrne. The magazine covers rising and established figures of interest in the creative industries, a remit including film, radio, TV, comedy, music, new media and design.
Subjects are usually UK-based, with contributing editors based in Northern Ireland, Scotland, London and Birmingham, but the publication has been known to source international content from Australia, America, continental Europe and the Middle East. The magazine is frequently organised around a theme for the issue, for instance giving half of November 2007's pages over to profiling winners of the annual 4Talent Awards.
An unusual feature of the magazine's credits is the equal prominence given to the names of writers, photographers, designers and illustrators, contradicting standard industry practice of more prominent writer bylines. It is also recognisable for its 'wraparound' covers, which use the front and back as a continuous canvas – often produced by guest artists.
Although "4Talent Magazine" is technically a newsstand title, a significant proportion of its readers are subscribers. It started life as a quarterly 100-page title, but has since doubled in size and is now published bi-annually.
Scheduling.
Since the 2010s, Channel 4 has become the public service broadcaster most likely to amend its schedule at short notice, if programmes are not gaining sufficient viewers in their intended slots. Programmes which have been heavily promoted by the channel before launch and then have lost their slot a week later include "Sixteen: Class of 2021". This was a fly-on-the-wall school documentary which lost its prime 9pm slot after one episode on 31 August 2021, even after a four-star review in "The Guardian". Channel 4 moved the next episode to a late night (post-primetime) slot on a different day and continued to broadcast the remainder of the four-part series in this timeslot.
Also in 2021, the channel launched "Epic Wales: Valleys, Mountains and Coast", a version of its More4 documentaries "The Pennines: Backbone of Britain", "The Yorkshire Dales and The Lakes" and "Devon and Cornwall" set in Wales. "Epic Wales: Valleys, Mountains and Coast" was initially broadcast in a prime Friday night slot at 8pm, in the hour before the channel's comedy shows, but was dropped before the series was completed and replaced by repeats. In February 2022, the channel scheduled a new version of the show under the title "Wonderous Wales" in a Saturday night slot at 8pm, but after one episode it took this series out of its schedule, moving up a repeat of "Matt Baker: Our Farm in the Dales" to 8pm and putting an episode of "Escape to the Chateau" in Baker's slot at 7pm.
Other programmes moved out of primetime in 2022 include "Mega Mansion Hunters", Channel 4's answer to "Selling Sunset", which saw its third and final episode moved past midnight with repeats put in the schedule before it, and "Richard Hammond's Crazy Contraptions", a primetime Friday night competitive engineering show whose grand final was moved to 11pm on a Sunday night. In place of Hammond's competition, Channel 4 scheduled the fifth series of "Devon and Cornwall" at 8pm on Friday nights, putting the documentary up against Channel 5's "World's Most Scenic Railway Journeys" in the same timeslot.
A new series of "Unreported World" was due to start on 18 February 2022 with a report by Seyi Rhodes in South Sudan, but was dropped due to an extended storm report on "Channel 4 News". When the programme was rescheduled for following Fridays, it was dropped again as "Channel 4 News" was extended due to the 2022 Russian invasion of Ukraine. "Winter Paralympics: Today in Beijing" was due to take the "Unreported World" slot from 11 March 2022 though this sports programme also stood a chance of being moved around the schedule to continue the extended news programmes reporting on the conflict. The invasion of Ukraine has also prompted Channel 4 to acquire and schedule the comedy series "Servant of the People" as a last minute replacement. The programme stars the current President of Ukraine Volodymyr Zelenskyy as an ordinary man who gets elected to run the country, and was shown on 6 March 2022 along with the documentary "Zelenskyy: The Man Who Took on Putin".
In addition to these shows, O.T. Fagbenle's sitcom "Maxxx" was pulled from youth TV channel E4 after one episode had been broadcast on 2 April 2020, with Channel 4 deciding to keep the series off-air until Black History Month; it eventually went out on the main channel from October 2020.
In May 2022, the reality dating show "Let's Make a Love Scene" was scrapped after one episode with the second programme in the series, hosted by Ellie Taylor, pulled from the 20 May schedule and replaced with an episode of "8 Out of 10 Cats Does Countdown". The first edition was negatively received, with Anita Singh, the arts and entertainments editor for "The Telegraph" writing that the show was "the most ill-conceived programme idea since Prince Edward dreamt up "It's a Royal Knockout"".
Presentation.
Since its launch in 1982, Channel 4 has used the same logo, which consists of a stylised numeral "4" made up of nine differently shaped blocks.
The original version was designed by Martin Lambie-Nairn and his partner Colin Robinson and was the first UK channel ident made using advanced computer generation (the first electronically generated ident was on BBC2 in 1979, but this was two-dimensional). It was designed in conjunction with Bo Gehring Aviation of Los Angeles and originally depicted the "4" in red, yellow, green, blue and purple. The music accompanying the ident was called "Fourscore" and was composed by David Dundas; it was later released as a single alongside a B-side, "Fourscore Two", although neither reached the UK charts. In November 1992, "Fourscore" was replaced by new music.
In 1996, Channel 4 commissioned Tomato Films to revamp the "4", which resulted in the "Circles" idents showing four white circles forming up transparently over various scenes, with the "4" logo depicted in white in one of the circles.
In 1999, Spin redesigned the logo to feature in a single square that sat on the right-hand side of the screen, whilst various stripes would move along from left to right, often lighting the squared "4" up. Like the previous "Circles" idents from 1996 (which was made by Tomato Films), the stripes would be interspersed with various scenes potentially related to the upcoming programme.
The logo was made three-dimensional again in 2004 when it was depicted in filmed scenes that show the blocks forming the "4" logo for less than a second before the action moves away again.
In 2015, a new presentation package by the network's in-house agency 4Creative was introduced. Directed by filmmaker Jonathan Glazer, the "4" logo itself was downplayed on-air in favour of idents and bumpers featuring the individual blocks as objects, including idents depicting them as "Kryptonite"-like items of fascination (such as being excavated, and viewed under a microscope for scientific study) that reflect Channel 4's remit of being "irreverent, innovative, alternative and challenging". Musician Micachu composed music for the idents. This theme continued in 2017, with new idents by Dougal Wilson that focused on an anthropomorphic "giant" constructed from the blocks, and its interactions in everyday life. A new acoustic rendition of "Fourscore" was also composed for the idents.
In September 2018, all of Channel 4's digital channels underwent a rebranding by ManvsMachine and 4Creative, including new logos that incorporate variants of the Lambie-Nairn "4". The rebranding was intended to give Channel 4's family of services a more uniform brand identity, while still allowing room for individualized elements that reflect their positioning and programming.
The original 1982 ident was given a one-off revival on 28 December 2020, as a tribute to Lambie-Nairn after his death three days earlier. It was also used on 22 January 2021 as part of the 80s-themed "takeover" to promote the premiere of "It's a Sin", which was set during the 1980s AIDS crisis.
To mark the network's 40th anniversary, Channel 4 began to phase in another rebranding in November 2022, and announced that new idents were being produced that would be "an unexpected and daring portrait of Britain retold". In an effort to emphasise its digital platforms, it was announced that the "All4" branding would be dropped from Channel 4's video on-demand platform, in favour of marketing it under the "Channel 4" name with no disambiguation. The new idents, "Modern Britain", premiered in June 2023, featuring looping cycles of themed scenes built around the Channel 4 logo by various artists.
Regions/international.
Regions.
Channel 4 has, since its inception, broadcast identical programmes and continuity throughout the United Kingdom (excluding Wales where it did not operate on analogue transmitters). At launch this made it unique, as both the BBC and ITV had long-established traditions of providing regional variations in their programming in different areas of the country. Since the launch of subsequent British television channels, Channel 4 has become typical in its lack of regional programming variations.
A few exceptions to this rule exist for programming and continuity.
Part of Channel 4's remit covers the commissioning of programmes from outside London. Channel 4 has a dedicated director of nations and regions, Stuart Cosgrove, who is based in a regional office in Glasgow. As his job title suggests, it is his responsibility to foster relations with independent producers based in areas of the United Kingdom (including Wales) outside London.
International.
Channel 4 is available in the Republic of Ireland, with advertisements tailored to the Irish market. The channel is registered with the broadcasting regulators in Luxembourg for terms of conduct and business within the EU/EEA, while observing guidelines outlined by Ireland's BAI code. Irish advertising sales are managed by Media Link in Dublin. Where Channel 4 does not hold broadcasting rights within the Republic of Ireland, such programming is unavailable; for example, the series "Glee" was not available on Channel 4 on Sky in Ireland because it was broadcast on TV3 within Ireland. Currently, programming available on Channel 4 is available within the Republic of Ireland without restrictions. Elsewhere in Europe, the UK version of the channel is available.
Future possibility of regional news.
With ITV plc pushing for much looser requirements on the amount of regional news and other programming it is obliged to broadcast in its ITV regions, the idea of Channel 4 taking on a regional news commitment has been considered, with the corporation in talks with Ofcom and ITV over the matter. Channel 4 believes that a scaling-back of such operations on ITV's part would be detrimental to Channel 4's national news operation, which shares much of its resources with ITV through their shared news contractor ITN. At the same time, Channel 4 also believes that such an additional public service commitment would bode well in on-going negotiations with Ofcom in securing additional funding for its other public service commitments.
Channel 4 HD.
In mid-2006, Channel 4 ran a six-month closed trial of HDTV, as part of the wider Freeview HD experiment via the Crystal Palace transmitter to London and parts of the home counties. The trial included "Lost" and "Desperate Housewives", as US broadcasters such as ABC already had an HDTV back catalogue.
On 10 December 2007, Channel 4 launched a high-definition television simulcast of Channel 4 on Sky's digital satellite platform, after Sky agreed to contribute toward the channel's satellite distribution costs. It was the first full-time high-definition channel from a terrestrial UK broadcaster.
On 31 July 2009, Virgin Media added Channel 4 HD on channel 146 (later on channel 142, now on channel 141) as part of the M pack. On 25 March 2010, Channel 4 HD appeared on Freeview channel 52 with a placeholding caption, ahead of a commercial launch on 30 March 2010, coinciding with the commercial launch of Freeview HD. On 19 April 2011, Channel 4 HD was added to Freesat on channel 126. As a consequence, the channel moved from being free-to-view to free-to-air on satellite during March 2011. With the closure of S4C Clirlun in Wales on 1 December 2012, on Freeview, Channel 4 HD launched in Wales on 2 December 2012.
The channel carries the same schedule as Channel 4, broadcasting programmes in HD when available, acting as a simulcast. Therefore, SD programming is broadcast upscaled to HD. The first true HD programme to be shown was the 1996 Adam Sandler film "Happy Gilmore". From launch until 2016 the presence of the 4HD logo on screen denoted true HD content.
On 1 July 2014, Channel 4 +1 HD, an HD simulcast of Channel 4 +1, launched on Freeview channel 110. It closed on 22 June 2020 to help make room on COM7 following the closure of COM8 on Freeview; 4seven HD was also removed from Freeview.
On 20 February 2018, Channel 4 announced that Channel 4 HD and All 4 would no longer be supplied on Freesat from 22 February 2018. Channel 4 HD returned to the platform on 8 December 2021, along with the music channel portfolio of The Box Plus Network.
On 27 September 2022, the other six advertising regions of Channel 4 (South, Midlands, North, Scotland, Northern Ireland and Republic of Ireland) were made available in HD on Sky and Virgin Media. Prior to this, Channel 4 HD was only available in the London advertising region.
Video on demand.
Channel 4's video on demand service, known simply as "Channel 4" since April 2023, launched in November 2006 as "4oD", and was renamed "All 4" in March 2015. The service offers a variety of programmes recently shown on Channel 4, E4, More4 or from their archives, though some programmes and movies are not available due to rights issues.
Teletext services.
4-Tel/FourText.
Channel 4 originally licensed an ancillary teletext service to provide schedules, programme information and features. The original service was called 4-Tel, and was produced by Intelfax, a company set up especially for the purpose. It was carried in the 400s on Oracle. In 1993, with Oracle losing its franchise to Teletext Ltd, 4-Tel found a new home in the 300s, and had its name shown in the header row. Intelfax continued to produce the service and in 2002 it was renamed FourText.
Teletext on 4.
In 2003, Channel 4 awarded Teletext Ltd a ten-year contract to run the channel's ancillary teletext service, named Teletext on 4. The service closed in 2008, and Teletext is no longer available on Channel 4, ITV and Channel 5.
Carolina parakeet
The Carolina parakeet (Conuropsis carolinensis), or Carolina conure, is an extinct species of small green neotropical parrot with a bright yellow head, reddish orange face, and pale beak that was native to the Eastern, Midwest, and Plains states of the United States. It was the only indigenous parrot within its range, and one of only three parrot species native to the United States. The others are the thick-billed parrot, now extirpated, and the green parakeet, still present in Texas; a fourth parrot species, the red-crowned amazon, is debated.
The Carolina parakeet was called "puzzi la née" ("head of yellow") or "pot pot chee" by the Seminole and "kelinky" in Chickasaw. Though formerly prevalent within its range, the bird had become rare by the middle of the 19th century. The last confirmed sighting in the wild was of the "C. c. ludovicianus" subspecies in 1910. The last known specimen, a male named Incas, perished in captivity at the Cincinnati Zoo in 1918, and the species was declared extinct in 1939.
The earliest reference to these parrots was in 1583 in Florida reported by Sir George Peckham in "A True Report of the Late Discoveries of the Newfound Lands" of expeditions conducted by English explorer Sir Humphrey Gilbert, who notes that explorers in North America "doe testifie that they have found in those countryes; ... parrots". They were first scientifically described in English naturalist Mark Catesby's two-volume "Natural History of Carolina, Florida and the Bahama Islands" published in London in 1731 and 1743.
Carolina parakeets were probably poisonous – French-American naturalist and painter John J. Audubon noted that cats apparently died from eating them, and they are known to have eaten the toxic seeds of cockleburs.
Taxonomy.
"Carolinensis" is a species of the genus "Conuropsis", one of numerous genera of New World Neotropical parrots in family Psittacidae of true parrots.
The binomial "Psittacus carolinensis" was assigned by Swedish zoologist Carl Linnaeus in the 10th edition of "Systema Naturae" published in 1758. The species was given its own genus, "Conuropsis", by Italian zoologist and ornithologist Tommaso Salvadori in 1891 in his "Catalogue of the Birds in the British Museum", volume 20. The name is derived from the Greek-ified "conure" ("parrot of the genus "Conurus"" an obsolete name of genus "Aratinga") + "-opsis" ("likeness of") and Latinized "Carolina" (from Carolana, an English colonial province) + "-ensis" (of or "from a place"), therefore a bird "like a conure from Carolina".
Two subspecies are recognized. The Louisiana subspecies, "C. c. ludovicianus", was slightly different in color from the nominate subspecies, being more bluish-green and generally of a somewhat subdued coloration, and became extinct in much the same way, but at a somewhat earlier date (the early 1910s). The Appalachian Mountains separated these birds from the eastern "C. c. carolinensis".
Evolution.
According to a study of mitochondrial DNA recovered from museum specimens, their closest living relatives include some of the South American "Aratinga" parakeets: The Nanday parakeet, the sun conure, and the golden-capped parakeet. The authors note the bright yellow and orange plumage and blue wing feathers found in "C. carolinensis" are traits shared by another species, the jandaya parakeet ("A. jandaya"), that was not sampled in the study, but is generally thought to be closely related. To help resolve the divergence time, a whole genome of a preserved specimen has now been sequenced. The Carolina parakeet colonized North America about 5.5 million years ago. This was well before North America and South America were joined by the formation of the Panama land bridge about 3.5 mya. Since the Carolina parakeets' more distant relations are geographically closer to its own historic range while its closest relatives are more geographically distant to it, these data are consistent with the generally accepted hypothesis that Central and North America were colonized at different times by distinct lineages of parrots – parrots that originally invaded South America from Antarctica some time after the breakup of Gondwana, where Neotropical parrots originated approximately 50 mya.
A cladogram based on the DNA study by Kirchman "et al." (2012) shows the placement of the Carolina parakeet among its closest relatives.
A fossil parrot, designated "Conuropsis fratercula", was described based on a single humerus from the Miocene Sheep Creek Formation (possibly late Hemingfordian, c. 16 mya, possibly later) of Snake River, Nebraska. It was a smaller bird, three-quarters the size of the Carolina parakeet. "The present "species" is of peculiar interest as it represents the first known parrot-like bird to be described as a fossil from North America." (Wetmore 1926; italics added) However, it is not completely certain that the species is correctly assigned to "Conuropsis".
Description.
The Carolina parakeet was a small, green parrot very similar in size and coloration to the extant jenday parakeet and sun conure – the sun conure being its closest living relative.
The majority of the parakeet's plumage was green with lighter green underparts, a bright yellow head, and an orange forehead and face extending to behind the eyes and upper cheeks (lores). The shoulders were yellow, continuing down the outer edge of the wings. The primary feathers were mostly green, but with yellow edges on the outer primaries. Thighs were green towards the top and yellow towards the feet. Male and female adults were identical in plumage, but males were slightly larger than females (sexually dimorphic only in size). Their legs and feet were light brown, and they shared the zygodactyl feet common to all of the parrot family. Their eyes were ringed by white skin and their beaks were pale flesh colored. These birds weighed about 3.5 oz (100 g), were about 13 in (33 cm) long, and had wingspans of 21–23 in (53–58 cm).
Young Carolina parakeets differed slightly in coloration from adults. The face and entire body were green, with paler underparts. They lacked yellow or orange plumage on the face, wings, and thighs. Hatchlings were covered in mouse-gray down, until about 39–40 days old, when green wings and tails appeared. Fledglings had full adult plumage around 1 year of age.
These birds were fairly long-lived, at least in captivity: A pair was kept at the Cincinnati Zoo for over 35 years.
Distribution and habitat.
The Carolina parakeet had the northernmost range of any known parrot. It was found from southern New York and Wisconsin to Kentucky, Tennessee, and the Gulf of Mexico, and from the Atlantic Seaboard as far west as eastern Colorado. Early explorers described its range thus: the 43rd parallel as the northern limit and the 26th as the most southern, with the 73rd and 106th meridians as the eastern and western boundaries respectively; the range included all or portions of at least 28 states. Its habitats were old-growth wetland forests along rivers and in swamps, especially in the Mississippi-Missouri drainage basin, with large hollow trees, including cypress and sycamore, to use as roosting and nesting sites.
Only very rough estimates of the birds' former prevalence can be made. With an estimated range of 20,000 to 2.5 million km2 and a population density of 0.5 to 2.0 parrots per km2, population estimates range from tens of thousands to a few million birds (though the densest populations occurred in Florida, covering 170,000 km2, so hundreds of thousands of the birds may have been in that state alone).
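These bounds follow directly from multiplying range area by population density. As a rough illustrative check (simple arithmetic on the figures quoted above, not a result reported by the underlying estimates):

$$20{,}000\,\mathrm{km^2}\times 0.5\,\mathrm{birds/km^2}=10{,}000 \quad\text{and}\quad 2.5\times10^{6}\,\mathrm{km^2}\times 2.0\,\mathrm{birds/km^2}=5\times10^{6}\ \text{birds},$$

spanning "tens of thousands to a few million"; for Florida alone, $170{,}000\,\mathrm{km^2}\times(0.5\text{–}2.0)\,\mathrm{birds/km^2}\approx 85{,}000\text{–}340{,}000$ birds, consistent with "hundreds of thousands" in that state.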
The species may have appeared as a very rare vagrant in places as far north as southern Ontario in Canada. A few bones, including a pygostyle found at the Calvert Site in southern Ontario, came from the Carolina parakeet. The possibility remains open that this specimen was taken there for ceremonial purposes.
Behavior and diet.
The bird lived in huge, noisy flocks of as many as 300 birds. It built its nest in a hollow tree, laying two to five (most accounts say two) round white eggs. Reportedly, multiple female parakeets could deposit their eggs into one nest, similar to nesting behavior described in the monk parakeet ("Myiopsitta monachus").
It mostly ate the seeds of forest trees and shrubs, including those of cypress, hackberry, beech, sycamore, elm, pine, maple, oak, and other plants such as thistles and sandspurs ("Cenchrus" species). It ate fruits, including apples, grapes, and figs (often from orchards by the time of its decline), and flower buds, and occasionally, insects. It was especially noted for its predilection for cockleburs ("Xanthium strumarium"), a plant which contains a toxic glucoside, and it was considered to be an agricultural pest of grain crops.
Extinction.
The last captive Carolina parakeet, Incas, died at the Cincinnati Zoo on February 21, 1918, in the same cage as Martha, the last passenger pigeon, which died in 1914. There are no scientific studies or surveys of this bird by American naturalists; most information about it is from anecdotal accounts and museum specimens, so details of its prevalence and decline are unverified or speculative.
Extensive accounts of the bird's prevalence have been given for the precolonial and early colonial periods. The existence of flocks of gregarious, very colorful and raucous parrots could hardly have gone unnoted by European explorers, as parrots were virtually unknown in seafaring European nations in the 16th and 17th centuries. Later accounts, from the latter half of the 19th century onward, noted the birds' sparseness and absence.
Genetic evidence indicates that populations had been in decline since the last glacial maximum, while the lack of evidence of inbreeding suggests that the final decline was very rapid.
The birds' range collapsed from east to west with settlement and clearing of the eastern and southern deciduous forests. John J. Audubon commented as early as 1832 on the decline of the birds. The bird was rarely reported outside Florida after 1860. The last reported sighting east of the Mississippi River (except Florida) was in 1878 in Kentucky. By the turn of the century, it was restricted to the swamps of central Florida. The last known wild specimen was killed in Okeechobee County, Florida, in 1904, and the last captive bird died at the Cincinnati Zoo on February 21, 1918. This was the male specimen, Incas, who died within a year of his mate, Lady Jane. Additional reports of the bird were made in Okeechobee County, Florida, until the late 1920s, but these are not supported by specimens. However, two sets of eggs purportedly taken from active nests in 1927 are in the collection of the Florida Museum of Natural History, and genetic testing could prove whether the species was still breeding at that time. Not until 1939, however, did the American Ornithologists' Union declare the Carolina parakeet to be extinct. The IUCN has listed the species as extinct since 1920.
In 1937, three parakeets resembling this species were sighted and filmed in the Okefenokee Swamp of Georgia. However, the American Ornithologists' Union analyzed the film and concluded that they had probably filmed feral parakeets. A year later, in 1938, a flock of parakeets was apparently sighted by a group of experienced ornithologists in the swamps of the Santee River basin in South Carolina, but this sighting was doubted by most other ornithologists. The birds were never seen again after this sighting, and shortly after a portion of the area was destroyed to make way for power lines, making the species' continued existence unlikely.
About 720 skins and 16 skeletons are housed in museums around the world, and analyzable DNA has been extracted from them.
Reasons for extinction.
The evidence indicates that humans had at least a contributory role in the extinction of the Carolina parakeet, through a variety of means. Chief among these was deforestation in the 18th and 19th centuries. Hunting also played a significant role, both for the decorative use of the birds' colorful feathers (for example, in the adornment of women's hats) and to reduce crop predation; this was partially offset by the recognition of their value in controlling invasive cockleburs. Minor roles were played by capture for the pet trade and, as noted in "Pacific Standard", by the introduction for crop pollination of European honeybees that competed for nest sites.
A factor that exacerbated their decline to extinction was the flocking behavior that led them to return to the vicinity of dead and dying birds (such as birds downed by hunting), enabling wholesale slaughter.
The final extinction of the species in the early years of the 20th century is something of a mystery, as it happened so rapidly. Vigorous flocks with many juveniles and reproducing pairs were noted as late as 1896, and the birds were long-lived in captivity, yet they had virtually disappeared by 1904. Sufficient nest sites remained intact, so deforestation was not the final cause. American ornithologist Noel F. Snyder speculates that the most likely cause is that the birds succumbed to poultry disease, although no recent or historical records exist of New World parrot populations being afflicted by domestic poultry diseases. The modern poultry scourge Newcastle disease was not detected until 1926 in Indonesia, and only a subacute form of it was reported in the United States in 1938. Genetic research on samples did not show any significant presence of bird viruses (though this does not conclusively rule out disease).
Church (building)
A church, church building, church house, or chapel is a building used for Christian worship services and Christian activities. The earliest identified Christian church is a house church founded between 233 AD and 256 AD. "Church" is also used to describe a body or assembly of Christian believers, while "the Church" may be used to refer to the worldwide Christian religious community as a whole.
In traditional Christian architecture, the plan view of a church often forms a Christian cross, with the centre aisle and seating representing the vertical beam and the bema and altar forming the horizontal. Towers or domes may inspire contemplation of the heavens. Modern churches have a variety of architectural styles and layouts. Some buildings designed for other purposes have been converted to churches, while many original church buildings have been put to other uses. From the 11th to the 14th century, there was a wave of church construction in Western Europe.
Many churches worldwide are of considerable historical, national, cultural, and architectural significance, with several included in the list of UNESCO World Heritage Sites.
Etymology.
The word "church" is derived from Old English , 'place of assemblage set aside for Christian worship', from the Common Germanic word "kirika". This was probably borrowed via Gothic from Ancient Greek , 'the Lord's (house)', from , 'ruler, lord'. in turn comes from the Indo-European root , meaning 'to spread out, to swell' (euphemistically: 'to prevail, to be strong').
The various forms of the cognates to "church" in various languages reflect the word's linguistic roots in Greek and Proto-Indo-European origins. For instance, in early Germanic languages such as Old High German, the word evolved into "kirihha", highlighting its spread through the Christianization of Germanic peoples. This etymological journey illustrates how the concept of a place of Christian worship was linguistically adapted as Christianity expanded across Europe. Additionally, the use of the word in early Christian communities emphasized the association of the building with its dedication to God.
The Greek "kyriakón", 'of the Lord', has been used of houses of Christian worship since about 300 AD, especially in the East, although it was less common in this sense than "ekklēsía" or "basilikḗ".
History.
Churches have evolved from early house churches (pre-4th century) to grand basilicas after Christianity's legalization in 313 AD. The Romanesque period (10th–12th century) featured thick walls and round arches, while the Gothic style (12th–16th century) introduced pointed arches and flying buttresses for taller, light-filled structures. Later styles include Renaissance symmetry, Baroque ornamentation, and modernist minimalism.
Common church features include:
Modern churches blend tradition with function, incorporating minimalist designs and contemporary community spaces while preserving a sense of originality and faith.
Antiquity.
The earliest archeologically identified Christian church is a house church ("domus ecclesiae"), the Dura-Europos church, founded between 233 AD and 256 AD.
In the second half of the third century AD, the first purpose-built halls for Christian worship ("aula ecclesiae") began to be constructed. Many of these structures were destroyed during the Diocletianic Persecution in the early 4th century. Even larger and more elaborate churches began to appear during the reign of Emperor Constantine the Great.
Medieval times.
From the 11th through the 14th centuries, a wave of cathedral building and the construction of smaller parish churches occurred across Western Europe. Besides serving as a place of worship, the cathedral or parish church was frequently employed as a general gathering place by the communities in which they were located, hosting such events as guild meetings, banquets, mystery plays, and fairs. Church grounds and buildings were also used for the threshing and storage of grain.
Romanesque architecture.
Between 1000 and 1200, the Romanesque style became popular across Europe. The Romanesque style is defined by large and bulky edifices typically composed of simple, compact, sparsely decorated geometric structures. Frequent features of the Romanesque church include circular arches, round or octagonal towers, and cushion capitals on pillars. In the early Romanesque era, coffering on the ceiling was fashionable, while later in the same era, groined vaults gained popularity. Interiors widened, and the motifs of sculptures took on more epic traits and themes. Romanesque architects adopted many Roman or early Christian architectural ideas, such as a cruciform ground plan, as that of Angoulême Cathedral, and the basilica system of a nave with a central vessel and side aisles.
Gothic architecture.
The Gothic style emerged around 1140 in Île-de-France and subsequently spread throughout Europe. Gothic churches lost the compact qualities of the Romanesque era, and decorations often contained symbolic and allegorical features. The first pointed arches, rib vaults, and buttresses began to appear, all possessing geometric properties that reduced the need for large, rigid walls to ensure structural stability. This also permitted the size of windows to increase, producing brighter and lighter interiors. Nave ceilings rose, and pillars and steeples heightened. Many architects used these developments to push the limits of structural possibility – an inclination that resulted in the collapse of several towers whose designs had unwittingly exceeded the boundaries of soundness. In Germany, the Netherlands and Spain, it became popular to build hall churches, a style in which every vault would be built to the same height.
Gothic cathedrals were lavishly designed, as in the Romanesque era, and many share Romanesque traits. Bagneux Church, France (1170–1190) exhibited both styles: a Romanesque tower, and a Gothic nave and choir. Several also exhibit unprecedented degrees of detail and complexity in decoration. Notre-Dame de Paris and Reims Cathedral in France, as well as the church of San Francesco d'Assisi in Palermo, Salisbury Cathedral and the wool churches in England, and Santhome Church in Chennai, India, show the elaborate stylings characteristic of Gothic cathedrals.
Some of the most well-known Gothic churches remained unfinished for centuries after the style fell out of popularity. One such example is Cologne Cathedral, whose construction began in 1248, was halted in 1473, and did not resume until 1842.
Renaissance.
In the fifteenth and sixteenth centuries, the changes in ethics and society due to the Renaissance and the Reformation also influenced the building of churches. The common style was much like the Gothic style but simplified. The basilica was not the most popular type of church anymore, but instead, hall churches were built. Typical features are columns and classical capitals.
In Protestant churches, where the proclamation of God's Word is of particular importance, the visitor's line of sight is directed towards the pulpit.
Baroque architecture.
The Baroque style was first used in Italy around 1575. From there, it spread to the rest of Europe and the European colonies. Building activity increased heavily during the Baroque era. Buildings, even churches, were used to indicate wealth, authority, and influence. The use of forms known from the Renaissance was extremely exaggerated. Domes and capitals were decorated with moulding, and the former stucco sculptures were replaced by fresco paintings on the ceilings. For the first time, churches were seen as one connected work of art, and consistent artistic concepts were developed. Instead of long buildings, more central-plan buildings were created. The sprawling decoration with floral ornamentation and mythological motifs lasted until about 1720, in the Rococo era.
Protestant churches often prioritize proximity between worshippers, the nave (main worship space), and the altar (often called a communion table). This is achieved through various architectural designs and practices, including moving the altar closer to the congregation, decreasing the distance between the entrance and the altar, and employing simpler architectural styles that focus attention on the pulpit and communion table.
Architecture.
A common trait of the architecture of many churches is the shape of a cross (a long central rectangle, with side rectangles and a rectangle in front for the altar space or sanctuary). These churches also often have a dome or other large vaulted space in the interior to represent or draw attention to the heavens. Other common shapes for churches include a circle, to represent eternity, or an octagon or similar star shape, to represent the church's bringing light to the world. Another common feature is the spire, a tall tower at the "west" end of the church or over the crossing.
Another common feature of many Christian churches is the eastwards orientation of the front altar.
Often, the altar will not be oriented due east but toward the sunrise. This tradition originated in Byzantium in the fourth century and became prevalent in the West in the eighth and ninth centuries. The old Roman custom of having the altar at the west end and the entrance at the east was sometimes followed as late as the eleventh century, even in areas of northern Europe under Frankish rule, as seen in Petershausen (Constance), Bamberg Cathedral, Augsburg Cathedral, Regensburg Cathedral, and Hildesheim Cathedral.
Types.
Basilica.
The Latin word "basilica" was initially used to describe a Roman public building usually located in the forum of a Roman town. After the Roman Empire became officially Christian, the term came by extension to refer to a large and influential church that has been given special ceremonial rights by the Pope. The word thus retains two senses today, one architectural and the other ecclesiastical.
Cathedral.
A cathedral is a church, usually Catholic, Anglican, Oriental Orthodox or Eastern Orthodox, housing the seat of a bishop. The word cathedral takes its name from the Latin "cathedra", or bishop's throne, itself from the Greek "kathédra", 'seat'. The term is sometimes (improperly) used to refer to any church of great size.
A church with a cathedral function is not necessarily a large building. It might be as small as Christ Church Cathedral in Oxford, England, Porvoo Cathedral in Porvoo, Finland, Sacred Heart Cathedral in Raleigh, United States, or Chur Cathedral in Switzerland. However, frequently, the cathedral, along with some of the abbey churches, was the largest building in any region.
Cathedrals tend to display a higher level of contemporary architectural style and the work of accomplished craftsmen, and occupy a status both ecclesiastical and social that an ordinary parish church rarely has. Such churches are generally among the finest buildings locally and a source of national and regional pride, and many are among the world's most renowned works of architecture.
Chapel.
A chapel is either a discrete space with an altar inside a larger cathedral, conventual, parish, or other church, or a free-standing small church building or room not connected to a larger church, serving a particular hospital, school, university, prison, private household, palace, castle, or other institution. Proprietary churches and small conventual churches are often referred to by this term.
Collegiate church.
A collegiate church is a church where the daily office of worship is maintained by a college of canons, which may be presided over by a dean or provost.
Collegiate churches were often supported by extensive lands held by the church, or by tithe income from appropriated benefices. They commonly provide distinct spaces for congregational worship and for the choir offices of their clerical community.
Conventual church.
A conventual church (in Eastern Orthodoxy "katholikon") is the main church in a Christian monastery or convent, known variously as an abbey, a priory, a friary, or a preceptory.
Parish church.
A parish church is a church built to meet the needs of people localised in a geographical area called a parish. The vast majority of Catholic, Orthodox, Anglican, and Lutheran church buildings fall into this category. A parish church may also be a basilica, a cathedral, a conventual or collegiate church, or a place of pilgrimage. The vast majority of parish churches do not however enjoy such privileges.
In addition to a parish church, each parish may maintain auxiliary organizations and their facilities such as a rectory, parish hall, parochial school, or convent, frequently located on the same campus or adjacent to the church.
Pilgrimage church.
A pilgrimage church is a church to which pilgrimages are regularly made, or a church along a pilgrimage route. It is often located at the tomb of a saint, at the site of Marian apparitions, or at a place holding icons or relics to which miraculous properties are ascribed.
Proprietary church.
During the Middle Ages, a proprietary church was a church, abbey, or cloister built on the private grounds of a feudal lord, over which he retained proprietary interests.
Evangelical church structures.
The architecture of evangelical places of worship is mainly characterized by its sobriety. The Latin cross is a well-known Christian symbol that can usually be seen on the building of an evangelical church and that identifies the building's affiliation. Some services take place in theaters, schools or multipurpose rooms rented for Sunday use only. There is usually a baptistery at the front of the church (in what is known as the chancel in historic traditions) or in a separate room for baptisms by immersion.
Worship services take on impressive proportions in megachurches (churches where more than 2,000 people gather every Sunday). In some of these, such as Lakewood Church (United States) and Yoido Full Gospel Church (South Korea), more than 10,000 people gather every Sunday; the term gigachurch is sometimes used for such congregations.
House church.
In some countries governed by sharia law or communist regimes, government authorization for worship is difficult for Christians to obtain. Because of the persecution of Christians, evangelical house churches have developed there; one example is the evangelical house church movement in China. Meetings take place in private houses, in secret and in "illegality".
Alternative buildings.
Old and disused church buildings can be seen as an interesting proposition for developers as the architecture and location often provide for attractive homes or city centre entertainment venues. On the other hand, many newer churches have decided to host meetings in public buildings such as schools, universities, cinemas or theatres.
There is another trend to convert old buildings for worship rather than face the construction costs and planning difficulties of a new build. Unusual venues in the UK include a former tram power station, a former bus garage, a former cinema and bingo hall, a former Territorial Army drill hall, and a former synagogue. One ship served as a floating church for mariners at Liverpool from 1827 until she sank in 1872. A windmill at Reigate Heath has also been converted into a church.
There have been increased partnerships between church management and private real estate companies to redevelop church properties into mixed uses. While it has garnered criticism, the partnership allows congregations to increase revenue while preserving the property.
Geographical distribution.
With the exception of Saudi Arabia and the Maldives, all sovereign states and dependent territories worldwide have church buildings. Among countries with churches, Afghanistan has the fewest, with only one official church: the Our Lady of Divine Providence Chapel in Kabul. Somalia follows closely, having once housed the Mogadishu Cathedral, along with the Saint Anthony of Padua Church in Somaliland. Other countries with a limited number of churches include Bhutan and Western Sahara.
In contrast, some estimates suggest that the United States has the highest number of churches in the world, with around 380,000, followed by Brazil and Italy. According to the Future for Religious Heritage, there are over 500,000 churches across Europe. Several cities are commonly known as the "City of Churches" due to their abundance of churches. These cities include Adelaide, Ani, Ayacucho, Kraków, Moscow, Montreal, Naples, Ohrid, Prague, Puebla, Querétaro, Rome, Salzburg, and Vilnius. Notably, Rome and New York City are home to the highest number of churches of any city in the world.
Although building churches is prohibited in Saudi Arabia, which has around 1.5 million Christians, the country contains the remnants of a historic church known as the Jubail Church, which dates back to the fourth century and was affiliated with the Church of the East. Discovered in 1986, the site was excavated by the Saudi Antiquities Department in 1987. As of 2008, the findings from this excavation had not been published, reflecting sensitivities regarding artifacts from non-Islamic religions. In the Maldives, which has approximately 1,400 Christians, building churches is likewise prohibited, and only foreign Christian workers are allowed to practice their religion privately. Despite the prohibition on church construction, both countries have secret home churches.
Christianity is the world's largest and most widespread religion, with over 2.3 billion followers. Churches are found on all seven continents: Asia, Africa, North America, South America, Antarctica, Europe, and Oceania. Antarctica is home to eight churches, with two additional churches located south of the Antarctic Convergence.
Many churches worldwide are of considerable historical, national, cultural, and architectural significance, with several recognized as UNESCO World Heritage Sites. According to the "Catholic Encyclopedia" the Cenacle (the site of the Last Supper) in Jerusalem was the "first Christian church". The Dura-Europos church in Syria is the oldest surviving church building in the world. Several authors have cited the Etchmiadzin Cathedral (Armenia's mother church) as the oldest cathedral in the world.
|
6326
|
48523215
|
https://en.wikipedia.org/wiki?curid=6326
|
Childe's Tomb
|
Childe's Tomb is a granite cross on Dartmoor, Devon, England. Although not in its original form, it is more elaborate than most of the crosses on Dartmoor, being raised upon a constructed base, and it is known that a kistvaen is underneath.
A well-known legend attached to the site, first recorded in 1630 by Tristram Risdon, concerns a wealthy hunter, Childe, who became lost in a snow storm and supposedly died there despite disembowelling his horse and climbing into its body for protection. The legend relates that Childe left a note of some sort saying that whoever found and buried his body would inherit his lands at Plymstock. After a race between the monks of Tavistock Abbey and the men of Plymstock, the Abbey won.
The tomb was virtually destroyed in 1812 by a man who stole most of the stones to build a house nearby, but it was partly reconstructed in 1890.
Description.
Childe's Tomb is a reconstructed granite cross on the south-east edge of Foxtor Mires, about 500 metres north of Fox Tor on Dartmoor, Devon, England at . According to William Burt, in his notes to "Dartmoor, a Descriptive Poem" by N. T. Carrington (1826), the original tomb consisted of a pedestal of three steps, the lowest of which was built of four stones each six feet long and twelve inches square. The two upper steps were made of eight shorter but similarly shaped stones, and on top was an octagonal block about three feet high with a cross fixed upon it.
The tomb lies on the line of several cairns that marked the east-west route of the ancient Monks' Path between Buckfast Abbey and Tavistock Abbey and it was no doubt erected here as part of that route: it would have been particularly useful in this part of the moor with few landmarks where a traveller straying from the path could easily end up in Foxtor Mires. Tristram Risdon, writing in about 1630, said that Childe's Tomb was one of three remarkable things in the Forest of Dartmoor (the others being Crockern Tor and Wistman's Wood). Risdon also stated that the original tomb bore an inscription: "They fyrste that fyndes and bringes mee to my grave, The priorie of Plimstoke they shall have", but no sign of this has ever been found.
Today the cross, which is a replacement, is about tall and across at the crosspiece, and it has its base in a socket stone which rests on a pedestal of granite blocks that raises the total height of the cross to . The original, now broken, socket stone for the cross lies nearby. The whole is surrounded by a circle of granite stones set on their edge which once surrounded the cairn—the rocks of which are now scattered around—that was originally built over a large kistvaen that still exists beneath the pedestal.
Destruction.
In the early 19th century, there was much interest in enclosing and "improving" the open moorland on Dartmoor, encouraged by Sir Thomas Tyrwhitt's early successes at Tor Royal near Princetown. Enclosure was aided by the greatly enhanced access provided by the construction of the first turnpike roads over the moor: the road between Ashburton and Two Bridges opened in around 1800, for instance. In February 1809 one Thomas Windeatt, from Bridgetown, Totnes, took over the lease of a plot of land (a "newtake") of about 582 acres in the valley of the River Swincombe. In 1812 Windeatt started to build a farmhouse, Fox Tor Farm, on his land and his workmen robbed the nearby Childe's Tomb of most of its stones for the building and its doorsteps.
In 1902, William Crossing wrote that he had been told by an old moorman that some of the granite blocks from the tomb's pedestal had also been used to make a clapper bridge across a stream flowing into the River Swincombe near the farm. The moorman also said that they had lettering on their undersides. This encouraged Crossing to arrange to lift the clapper bridge, but no inscription was found. However, he did locate nine out of the twelve stones that had made up the pedestal, as well as the broken socket stone for the cross.
Reconstruction.
Crossing rediscovered the original site of the tomb in 1882 and said that all that remained was a small mound and some half buried stones. He cleared out the kistvaen, reporting that it was long by wide and that unlike most kistvaens found on the moor, the stones lining it had apparently been shaped by man, which led him to suggest that it was less old than most. Having located most of the stones of the original tomb, Crossing thought that it could be rebuilt in its original form with little effort, but it was not to be.
J. Brooking Rowe, writing in 1895, states that the tomb was re-erected in 1890 under the direction of Mr. E. Fearnley Tanner, who said that he was dissatisfied with the result because several stones were missing and it was difficult to recreate the original character of the monument. Tanner was the honorary secretary of the Dartmoor Preservation Association, and this reconstruction was one of the first acts of that organisation. The replacement base and cross were made in Holne in 1885.
Childe the Hunter.
According to legend, the cross was erected over the kistvaen ('chest-stone' i.e. burial chamber) of Childe the Hunter, who was Ordulf, son of Ordgar, an Anglo-Saxon Earl of Devon in the 11th century. The name "Childe" is probably derived from the Old English word "cild" which was used as a title of honour.
Legend has it that Childe was in a party hunting on the moor when they were caught in some changeable weather. Childe became separated from the main party and was lost. In order to save himself from dying of exposure, he killed his horse, disembowelled it and crept inside the warm carcass for shelter. He nevertheless froze to death, but before he died, he wrote a note to the effect that whoever should find him and bury him in their church should inherit his Plymstock estate.
His body was found by the monks of Tavistock Abbey, who started to carry it back. However, they heard of a plot to ambush them by the people of Plymstock, at a bridge over the River Tavy. They took a detour and built a new bridge over the river, just outside Tavistock. They were successful in burying the body in the grounds of the Abbey and inherited the Plymstock estate.
The first account of this story is found in Risdon's "Survey of Devon", which was completed in around 1632.
Finberg pointed out, however, that a document of 1651 refers to Tavistock's guildhall as "Guilehall", so "Guilebridge" is more likely to be "guild bridge", probably because it was built or maintained by one of the town guilds.
In popular culture.
Devon folk singer Seth Lakeman sang about Childe the Hunter on his 2006 album "Freedom Fields".
|
6328
|
28481209
|
https://en.wikipedia.org/wiki?curid=6328
|
Cognate
|
In historical linguistics, cognates or lexical cognates are sets of words that have been inherited in direct descent from an etymological ancestor in a common parent language.
Because language change can have radical effects on both the sound and the meaning of a word, cognates may not be obvious, and it often takes rigorous study of historical sources and the application of the comparative method to establish whether lexemes are cognate.
Cognates are distinguished from loanwords, where a word has been borrowed from another language.
Name.
The English term "cognate" derives from Latin "cognatus", meaning "blood relative".
Examples.
Examples of cognates from the same Indo-European root are: "night" (English), "Nacht" (German), "nacht" (Dutch, Frisian), "nag" (Afrikaans), "Naach" (Colognian), "natt" (Swedish, Norwegian), "nat" (Danish), "nátt" (Faroese), "nótt" (Icelandic), "noc" (Czech, Slovak, Polish), ночь, "noch" (Russian), ноќ, "noć" (Macedonian), нощ, "nosht" (Bulgarian), "ніч", "nich" (Ukrainian), "ноч", "noch"/"noč" (Belarusian), "noč" (Slovene), "noć" (Serbo-Croatian), "nakts" (Latvian), "naktis" (Lithuanian), "nos" (Welsh/Cymraeg), νύξ, "nyx" (Ancient Greek), "νύχτα" / "nychta" (Modern Greek), "nakt-" (Sanskrit), "natë" (Albanian), "nox", gen. sg. "noctis" (Latin), "nuit" (French), "noche" (Spanish), "nochi" (Extremaduran), "nueche" (Asturian), "noite" (Portuguese and Galician), "notte" (Italian), "nit" (Catalan), "nuet/nit/nueit" (Aragonese), "nuèch" / "nuèit" (Occitan) and "noapte" (Romanian). These all mean 'night' and derive from the Proto-Indo-European "*nókʷts" 'night'. The Indo-European languages have hundreds of such cognate sets, though few of them are as neat as this.
The Arabic "salām", the Hebrew "shalom", the Assyrian Neo-Aramaic "shlama" and the Amharic "selam" 'peace' are cognates, derived from the Proto-Semitic *šalām- 'peace'.
The Paraguayan Guarani "panambi", the Eastern Bolivian Guarani "panapana", the Cocama and Omagua "panama", and the Sirionó "ana ana" are cognates, derived from the Old Tupi "panapana", 'butterfly', maintaining their original meaning in these Tupi languages. Brazilian Portuguese "panapanã" (flock of butterflies in flight) is a borrowing rather than a cognate of the other words.
Characteristics.
Cognates need not have the same meaning, as they may have undergone semantic change as the languages developed independently. For example, English "starve" and Dutch "sterven" 'to die' or German "sterben" 'to die' all descend from the same Proto-Germanic verb, "*sterbaną" 'to die'.
Cognates also do not need to look or sound similar: English "father", French "père", and Armenian հայր ("hayr") all descend directly from Proto-Indo-European "*ph₂tḗr". An extreme case is Armenian երկու ("erku") and English "two", which descend from Proto-Indo-European "*dwóh₁"; the sound change "*dw" > "erk" in Armenian is regular.
Paradigms of conjugations or declensions, the correspondence of which cannot be generally due to chance, have often been used in cognacy assessment. However, beyond paradigms, morphosyntax is often excluded in the assessment of cognacy between words, mainly because structures are usually seen as more subject to borrowing. Still, very complex, non-trivial morphosyntactic structures can rarely take precedence over phonetic shapes to indicate cognates. For instance, Tangut, the language of the Xixia Empire, and one Horpa language spoken today in Sichuan, Geshiza, both display a verbal alternation indicating tense, obeying the same morphosyntactic collocational restrictions. Even without regular phonetic correspondences between the stems of the two languages, the cognatic structures indicate secondary cognacy for the stems.
False cognates.
False cognates are pairs of words that appear to have a common origin, but which in fact do not. For example, Latin "habēre" and German "haben" both mean 'to have' and are phonetically similar. However, the words evolved from different Proto-Indo-European (PIE) roots: "haben", like English "have", comes from PIE "*kh₂pyé-" 'to grasp', and has the Latin cognate "capere" 'to seize, grasp, capture'. "Habēre", on the other hand, is from PIE "*gʰabʰ" 'to give, to receive', and hence cognate with English "give" and German "geben".
Likewise, English "much" and Spanish "mucho" look similar and have a similar meaning, but are not cognates: "much" is from Proto-Germanic "*mikilaz" < PIE "*meǵ-" and "mucho" is from Latin "multum" < PIE "*mel-". A true cognate of "much" is the archaic Spanish "maño" 'big'.
Distinctions.
Cognates are distinguished from other kinds of relationships.
Related terms.
Etymon (ancestor word) and descendant words.
An etymon, or ancestor word, is the ultimate source word from which one or more cognates derive.
In other words, it is the source of related words in different languages.
For example, the etymon of both Welsh "ceffyl" and Irish "capall" is the Proto-Celtic *"kaballos" (all meaning "horse").
Descendants are words inherited across a language barrier, coming from a particular etymon in an ancestor language.
For example, Russian "мо́ре" and Polish "morze" are both descendants of Proto-Slavic *"moře" (meaning "sea").
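As a purely illustrative aside, etymon-descendant relationships such as the two examples above are naturally represented as a small mapping in computational work. The following Python sketch is a hypothetical toy structure, not a standard lexical-database format; its entries simply echo the examples already given.

```python
# Toy data structure for cognate sets: etymon -> {language: descendant}.
# The entries echo the examples above; the structure itself is only an
# illustration, not any standard lexical-database format.

cognate_sets = {
    "Proto-Slavic *moře 'sea'": {"Russian": "мо́ре", "Polish": "morze"},
    "Proto-Celtic *kaballos 'horse'": {"Welsh": "ceffyl", "Irish": "capall"},
}

for etymon, descendants in cognate_sets.items():
    forms = ", ".join(f"{lang} {word}" for lang, word in descendants.items())
    print(f"{etymon} -> {forms}")
```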
Root and derivatives.
A root is the source of related words within a single language (no language barrier is crossed).
Similar to the distinction between "etymon" and "root", a nuanced distinction can sometimes be made between a "descendant" and a "derivative".
A derivative is one of the words which have their source in a root word, and were at some time created from the root word using morphological constructs such as suffixes, prefixes, and slight changes to the vowels or to the consonants of the root word.
For example "unhappy", "happily", and "unhappily" are all derivatives of the root word "happy".
The terms "root" and "derivative" are used in the analysis of morphological derivation within a language in studies that are not concerned with historical linguistics and that do not cross the language barrier.
|
6329
|
7903804
|
https://en.wikipedia.org/wiki?curid=6329
|
Chromatography
|
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the "mobile phase", which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the "stationary phase" is fixed. As the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
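As a rough illustration of this differential migration, the following minimal sketch uses the textbook relation that a solute's apparent velocity is the mobile-phase velocity divided by 1 + k, where k is the solute's retention factor; all names and numbers here are hypothetical.

```python
# Minimal sketch of differential migration (illustrative values only).
# A solute spends a fraction 1/(1 + k) of its time in the mobile phase,
# so its apparent velocity is u / (1 + k), where u is the mobile-phase
# velocity and k is the solute's retention factor.

u = 2.0  # mobile-phase velocity in mm/s (hypothetical)
retention_factors = {"dye 1": 0.5, "dye 2": 2.0, "dye 3": 6.0}  # hypothetical k values

for name, k in retention_factors.items():
    v = u / (1 + k)  # apparent velocity of the solute band
    print(f"{name}: moves at about {v:.2f} mm/s")
```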
Chromatography may be "preparative" or "analytical". The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification; operating at this production scale is associated with higher costs. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
Etymology and pronunciation.
Chromatography is derived from Greek χρῶμα "chrōma", which means "color", and γράφειν "gráphein", which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments.
History.
The method was developed by botanist Mikhail Tsvet in 1901–1905 at the universities of Kazan and Warsaw. He developed the technique and coined the term "chromatography" in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively), they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
Terms.
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. When one solvent is made immobile (by adsorption onto a solid support matrix) and the other mobile, the result is the most common application of chromatography. If the matrix support, or stationary phase, is polar (e.g., cellulose or silica), the technique is called normal-phase (forward-phase) chromatography. Otherwise the technique is known as reversed-phase chromatography, in which a non-polar stationary phase (e.g., a non-polar C-18 derivative of silica) is used.
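To make the relationship concrete, here is a hedged numerical sketch using the standard textbook relations k = K·(Vs/Vm) and t_R = t0·(1 + k); the analyte names, phase volumes, and partition coefficients are all hypothetical.

```python
# Textbook relations linking the partition coefficient K to retention:
# retention factor k = K * (Vs / Vm); retention time t_R = t0 * (1 + k).
# All numbers below are hypothetical.

def retention_factor(K: float, v_stationary: float, v_mobile: float) -> float:
    """Retention factor from partition coefficient and phase-volume ratio."""
    return K * (v_stationary / v_mobile)

def retention_time(t0: float, k: float) -> float:
    """Retention time given the column dead time t0."""
    return t0 * (1 + k)

t0 = 1.0            # dead time in minutes (hypothetical)
vs, vm = 0.2, 1.0   # stationary and mobile phase volumes in mL (hypothetical)

for name, K in {"analyte X": 4.0, "analyte Y": 12.0}.items():
    k = retention_factor(K, vs, vm)
    print(f"{name}: k = {k:.1f}, t_R ≈ {retention_time(t0, k):.1f} min")
```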
Techniques by chromatographic bed shape.
Column chromatography.
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called "flash column chromatography" (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
Planar chromatography.
"Planar chromatography" is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
Paper chromatography.
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of "chromatography paper". The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
Thin-layer chromatography (TLC).
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
Possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (the separation itself was a separate step).
Displacement chromatography.
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
Techniques by physical state of mobile phase.
Gas chromatography.
Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Further, capillary columns can be split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT), and support-coated open tubular (SCOT) columns. PLOT columns are unique in that the stationary phase is adsorbed to the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns combine the two types: they have support particles adhered to the column walls, and those particles carry a liquid phase chemically bonded onto them. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
Liquid chromatography.
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" made up of a continuous block of organic or inorganic material. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
Supercritical fluid chromatography.
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
Techniques by separation mechanism.
Affinity chromatography.
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
Ion exchange chromatography.
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to retain. There are two types of ion exchange chromatography: Cation-Exchange and Anion-Exchange. In the Cation-Exchange Chromatography the stationary phase has negative charge and the exchangeable ion is a cation, whereas, in the Anion-Exchange Chromatography the stationary phase has positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
Size-exclusion chromatography.
Size-exclusion chromatography (SEC) is also known as "gel permeation chromatography" (GPC) or "gel filtration chromatography" and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and, therefore, molecules are trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
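As a toy illustration of the elution order just described (larger species sample less pore volume and therefore elute earlier), here is a sketch with hypothetical hydrodynamic radii; it is illustrative only, not a model of any particular column.

```python
# Toy illustration of size-exclusion elution order: larger species sample
# less pore volume and therefore elute earlier. Radii (nm) are hypothetical.

species = {"aggregate": 12.0, "tetramer": 6.5, "monomer": 3.2, "peptide": 1.1}

for name, radius in sorted(species.items(), key=lambda item: -item[1]):
    print(f"{name} (hydrodynamic radius ≈ {radius} nm) elutes next")
```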
Expanded bed adsorption chromatographic separation.
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the expanded bed. The improved distribution of the feedstock liquor added into the expanded bed ensures that the fluid passing through the bed layer displays a state of piston flow, which increases the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
Special techniques.
Reversed-phase chromatography.
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.
Hydrophobic interaction chromatography.
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups, such as methyl, ethyl, propyl, butyl, octyl, or phenyl groups. At high salt concentrations, non-polar sidechains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a highly polar buffer, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentrations, increasing concentrations of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts as defined by the Hofmeister series providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study altered temperature so as to affect the binding affinity of BSA onto the matrix. It was concluded that cycling temperature from 40 to 10 degrees Celsius would not be adequate to wash all BSA from the matrix effectively, but could be very effective if the column were only used a few times. Using temperature to effect this change allows laboratories to cut the cost of buying salt.
If high salt concentrations along with temperature fluctuations are to be avoided, a more hydrophobic competitor can be used to displace the sample and elute it. This so-called salt-independent method of HIC showed direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations themselves precipitate proteins.
Hydrodynamic chromatography.
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than 10⁵ daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
HDC shares the same order of elution as size-exclusion chromatography (SEC), but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel use both methods for polysaccharide characterization and conclude that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution, in significantly less time, than SEC with off-line MALS. This is largely due to SEC being a more destructive technique, because the pores in the column degrade the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is the low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for HDC-on-a-chip system was proposed by Chmela, et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh, et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device.
Two-dimensional chromatography.
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical properties. Since the mechanism of retention on this new solid support is different from the first dimensional separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation in the second dimension occurs faster than that in the first dimension. An example of a two-dimensional TLC separation is where the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
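A rough, idealized way to quantify the benefit of the comprehensive approach is peak capacity, which multiplies across fully orthogonal dimensions; the numbers in this sketch are hypothetical, and real systems fall short of the ideal.

```python
# Idealized peak-capacity estimate for comprehensive 2D chromatography:
# with fully orthogonal dimensions, n_2D ≈ n1 * n2. Values are
# hypothetical; real systems fall short of this ideal.

n1 = 100  # peak capacity of the first dimension (hypothetical)
n2 = 30   # peak capacity of the second dimension (hypothetical)
print(f"Ideal 2D peak capacity ≈ {n1 * n2}")  # prints 3000
```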
Simulated moving-bed chromatography.
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, simulated moving bed technique was proposed. In the simulated moving bed technique instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed, and analyte and waste takeoff, at appropriate locations of any column, whereby the sample entry position is switched at regular intervals in one direction and the solvent entry position in the opposite direction, while the analyte and waste takeoff positions are changed appropriately as well.
Pyrolysis gas chromatography.
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the use of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors, which provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased, and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector have been published as well.
Fast protein liquid chromatography.
Fast protein liquid chromatography (FPLC) is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
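A gradient program of the kind described can be sketched as follows. The flow rate, gradient length, and the simple two-reservoir linear gradient are illustrative assumptions; real FPLC systems support more elaborate, multi-step programs.

```python
# A minimal sketch of an FPLC pump program: total flow is held constant
# while the fraction drawn from buffer B rises linearly. The flow rate
# and gradient length are illustrative assumptions.

FLOW_ML_MIN = 1.0          # constant total flow rate, mL/min
GRADIENT_MIN = 20.0        # length of the linear gradient, minutes
START_B, END_B = 0.0, 1.0  # fraction of buffer B at start and end

def composition(t_min):
    """Return (flow from A, flow from B) in mL/min at time t."""
    frac_b = START_B + (END_B - START_B) * t_min / GRADIENT_MIN
    frac_b = min(max(frac_b, 0.0), 1.0)  # clamp outside the gradient window
    return FLOW_ML_MIN * (1 - frac_b), FLOW_ML_MIN * frac_b

for t in (0, 5, 10, 15, 20):
    a, b = composition(t)
    print(f"t={t:>2} min: buffer A {a:.2f} mL/min, buffer B {b:.2f} mL/min")
```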
Countercurrent chromatography.
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
Hydrodynamic countercurrent chromatography (CCC).
The operating principle of a CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution, and components of the sample separate in the column according to their partition coefficients between the two immiscible liquid phases used. There are many types of CCC available today, including HSCCC (high-speed CCC) and HPCCC (high-performance CCC); HPCCC is the latest and best-performing version of the instrumentation currently available.
Centrifugal partition chromatography (CPC).
In the CPC (centrifugal partition chromatography, or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. The rotor rotates on its central axis, creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be described using the partition coefficients ("KD") of the solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations, with column sizes ranging from some 10 milliliters to 10 liters in volume.
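Retention in counter-current separations is commonly expressed as V_R = V_M + K_D·V_S, where V_M and V_S are the mobile- and stationary-phase volumes held in the column. A short sketch, with illustrative volumes and partition coefficients, shows how solutes elute in order of their "KD" values:

```python
# A minimal sketch of the standard counter-current retention relationship,
# V_R = V_M + K_D * V_S: solutes elute in order of their partition
# coefficients. Column volumes and K_D values are illustrative assumptions.

V_COLUMN = 250.0                      # total column volume, mL
STATIONARY_FRACTION = 0.7             # fraction retained as stationary phase
V_S = V_COLUMN * STATIONARY_FRACTION  # stationary-phase volume
V_M = V_COLUMN - V_S                  # mobile-phase volume

def retention_volume(kd):
    """Elution volume of a solute with partition coefficient K_D."""
    return V_M + kd * V_S

for name, kd in [("solute X", 0.5), ("solute Y", 1.0), ("solute Z", 2.0)]:
    print(f"{name}: K_D = {kd}, elutes at {retention_volume(kd):.0f} mL")
```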
Periodic counter-current chromatography.
In contrast to counter-current chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to counter-current chromatography. PCC uses multiple columns, which are connected in line during the loading phase. This mode allows the first column in the series to be overloaded without loss of product, because any product that breaks through before the resin is fully saturated is captured on the subsequent column(s). In the next step, the columns are disconnected from one another. The first column is washed and eluted while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.
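The loading cycle described above can be sketched as a rotation of column roles. The three-column setup and step ordering below are illustrative assumptions only:

```python
# A minimal sketch of the periodic counter-current loading cycle: the lead
# column is taken offline for wash/elute/re-equilibration while the rest
# keep capturing breakthrough, then rejoins at the back of the train.

from collections import deque

columns = deque(["col1", "col2", "col3"])

for cycle in range(3):
    train = list(columns)
    offline = train[0]
    print(f"cycle {cycle}: loading train {train[1:]} while {offline} "
          f"is washed, eluted, and re-equilibrated")
    # The regenerated column re-enters the loading stream as the last column.
    columns.rotate(-1)
```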
Chiral chromatography.
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases "nonracemic" mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g., HPLC without a chiral mobile phase or stationary phase).
Aqueous normal-phase chromatography.
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
Applications.
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
Clement Martyn Doke
https://en.wikipedia.org/wiki?curid=6330
Clement Martyn Doke (16 May 1893 in Bristol, United Kingdom – 24 February 1980 in East London, South Africa) was a South African linguist working mainly on African languages. Realizing that the grammatical structures of Bantu languages are quite different from those of European languages, he was one of the first African linguists of his time to abandon the Euro-centric approach to language description for a more locally grounded one. A most prolific writer, he published a string of grammars, several dictionaries, comparative work, and a history of Bantu linguistics.
Early life and career.
The Doke family had been engaged in missionary activity for the Baptist Church for some generations. His father, Reverend Joseph J. Doke, left England and travelled to South Africa in 1882, where he met and married Agnes Biggs. They returned to England, where Clement was born as the third of four children. The family moved to New Zealand and eventually returned to South Africa in 1903, where it later settled in Johannesburg.
At the age of 18, Clement received a bachelor's degree from Transvaal University College in Pretoria (now the University of Pretoria). He decided to devote his life to missionary activity. In 1913, he accompanied his father on a tour of north-western Rhodesia, to an area called Lambaland, now known as Ilamba. It is at the watershed of the Congo and Zambesi rivers. Part of the district lay in Northern Rhodesia and part in the Belgian Congo. The Cape-Cairo Railway threaded through its eastern portion; otherwise, most travel had to be on foot.
The Reverend William Arthur Phillips of the Nyasa Industrial Mission in Blantyre had established a Baptist mission there in 1905; it served an area of and 50,000 souls. The Dokes were supposed to investigate whether the mission in Lambaland could be taken over by the Baptist Union of South Africa. It was on that trip that Doke's father contracted enteric fever and died soon afterwards. Mahatma Gandhi attended the memorial service and addressed the congregation. Clement assumed his father's role.
The South African Baptists decided to take over Kafulafuta Mission, and its founder, Reverend Phillips, remained as superintendent. Clement Doke returned to Kafulafuta as missionary in 1914, followed by his sister Olive two years later.
Study of Lamba.
At first, Clement Doke was frustrated by his inability to communicate with the Lamba. The only written material available at the time was a translation of Jonah and a collection of 47 hymns. Soon, however, he mastered the language and published his first book, "Ifintu Fyakwe Lesa" ("The Things of God, a Primer of Scripture Knowledge") in 1917. He enrolled at the Johannesburg extension of Transvaal University College for an MA degree. His thesis was published as "The Grammar of the Lamba language". The book is couched in traditional grammatical terms, as Doke had not yet established his innovative method to analyse and describe the Bantu languages. His later "Textbook of Lamba Grammar" is far superior in that respect.
Doke was also interested in ethnology. In 1931 he compiled "The Lambas of Northern Rhodesia", which remains one of the outstanding ethnographic descriptions of the peoples of Central Africa. For Doke, literacy was part of evangelisation, since it was required for people to appreciate the Bible's message, but it was only after his retirement that he completed the translation of the Bible into Lamba. It was published under the title of "Amasiwi AwaLesa" ("The Words of God") in 1959.
University of the Witwatersrand.
In 1919, Doke married Hilda Lehmann, who accompanied him back to Lambaland. Both contracted malaria during their work, and she was forbidden to return to Lambaland. Clement Doke also realised that his field work could not continue much longer, and he left in 1921. He was recruited by the newly founded University of the Witwatersrand. So that he could secure a qualification as a lecturer, the family moved to England, where he registered at the School of Oriental and African Studies. His major languages were Lamba and Luba, but as no suitable examiner was available, he eventually had to change his language to Zulu.
Doke took up his appointment in the new Department of Bantu Studies at the University of Witwatersrand in 1923. In 1925 he received his D.Litt. for his doctoral thesis "The Phonetics of the Zulu Language" and was promoted to Senior Lecturer. In 1931 he was appointed to the Chair of Bantu Studies and thus headed the Department of Bantu Studies. The department acted as a catalyst for the admission of Africans to the university. As early as 1925 a limited number were admitted to the vacation course in African Studies. Doke supported the appointment of Benedict Wallet Vilakazi as member of the staff, as he believed a native speaker was essential for acquiring a language. That provoked a storm of criticism and controversy from the public. Both of them collaborated on the "Zulu-English Dictionary". First published in 1948, it is still one of the best examples of lexicography for any Bantu language.
At the request of the government of Southern Rhodesia, Doke investigated the range of dialect diversity among the languages of the country and made recommendations for "Unified Shona", which formed the basis for Standard Shona. He devised a unified orthography based on the Zezuru, Karanga and Manyika dialects. However, Doke's orthography was never fully accepted, and the South African government introduced an alternative, which left Shona with two competing orthographies between 1935 and 1955.
During his tenure, Doke developed and promoted a method of linguistic analysis and description of the Bantu languages that was based upon the structure of these languages. The "Dokean model" continues to be one of the dominant models of linguistic description in Southern and Central Africa. His classification of the Bantu languages was for many years the dominant view of the interrelations among the African languages. He was also an early describer of Khoisan and Bantu click consonants, devising phonetic symbols for a number of them.
Doke served the University of the Witwatersrand until his retirement in 1953. He was awarded the honorary degree of Doctor of Letters by Rhodes University and the honorary degree of Doctor of Laws by the University of the Witwatersrand in 1972.
The former missionary always remained devoted to the Baptist Church. He was elected President of the South African Baptist Union in 1949 and spent a year visiting churches and mission stations. He used his presidential address to condemn the recently established apartheid policy: "I solemnly warn the Government that the spirit behind their apartheid legislation, and the way in which they are introducing discriminatory measures of all types today, will bring disaster upon this fair land of ours."
Carl Meinhof
https://en.wikipedia.org/wiki?curid=6331
Carl Friedrich Michael Meinhof (23 July 1857 – 11 February 1944) was a German linguist and one of the first linguists to study African languages.
Early years and career.
Meinhof was born in Barzwitz near Rügenwalde in the Province of Pomerania, Kingdom of Prussia. He studied at the University of Tübingen and at the University of Greifswald. In 1905 he became professor at the School of Oriental Studies in Berlin. On 5 May 1933 he became a member of the Nazi Party.
Works.
His most notable work was developing comparative grammar studies of the Bantu languages, building on the pioneering work of Wilhelm Bleek. In his work, Meinhof looked at the common Bantu languages such as Swahili and Zulu to determine similarities and differences.
In his work, Meinhof also examined noun classes, finding that all Bantu languages have at least 10 classes and that 22 noun classes exist across the Bantu languages as a whole. His definition of noun class differs slightly from the accepted one in that he treated the plural form of a word as belonging to a different class from the singular form (which would lead one, for example, to consider a language like French as having four classes instead of two). While no language has all 22 (later: 23) classes active, Venda has 20, Lozi has 18, and Ganda has 16 or 17 (depending on whether the locative class 23 "e-" is included). All Bantu languages have a noun class specifically for humans (sometimes including other animate beings).
Meinhof also examined other African languages, including groups classified at the time as Kordofanian, Bushman, Khoikhoi, and Hamitic.
Meinhof developed a comprehensive classification scheme for African languages. His classification was the standard one for many years (Greenberg 1955:3). It was replaced by those of Joseph Greenberg in 1955 and in 1963. His ideas influenced the notation of African-language phonetics as advanced in the mid-nineteenth century by the Egyptologist Karl Richard Lepsius and gave rise to what some called the "Meinhof-Lepsius system" of diacritical markers.
In 1902, Meinhof made recordings of East African music. These are among the first recordings made of traditional African music.
Controversial views.
In 1912, Carl Meinhof published "Die Sprachen der Hamiten" (The Languages of the Hamites). He used the term Hamitic. Meinhof's system of classification of the Hamitic languages was based on a belief that "speakers of Hamitic became largely coterminous with cattle herding peoples with essentially Caucasian origins, intrinsically different from and superior to the 'Negroes of Africa'." However, in the case of the so-called Nilo-Hamitic languages (a concept he introduced), it was based on the typological feature of gender and a "fallacious theory of language mixture." Meinhof did this in spite of earlier work by scholars such as Lepsius and Johnston demonstrating that the languages which he would later dub "Nilo-Hamitic" were in fact Nilotic languages with numerous similarities in vocabulary with other Nilotic languages.
Family.
Carl Meinhof was the great-uncle (the brother of the grandfather) of Ulrike Meinhof, a well-known German journalist who later became a founding member of the Red Army Faction (RAF), a left-wing militant group operating chiefly in West Germany in the 1970s and 1980s.
Cucurbitaceae
https://en.wikipedia.org/wiki?curid=6335
The Cucurbitaceae, also called cucurbits or the gourd family, are a plant family consisting of about 965 species in 101 genera. Those of most agricultural, commercial or nutritional value to humans include squash, pumpkin, zucchini, watermelon, cucumber, and various melons and gourds.
The plants in this family are grown around the tropics and in temperate areas of the world, where those with edible fruits were among the earliest cultivated plants in both the Old and New Worlds. The family Cucurbitaceae ranks among the highest of plant families for number and percentage of species used as human food. The name "Cucurbitaceae" comes to international scientific vocabulary from Neo-Latin, from "Cucurbita", the type genus, + "-aceae", a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word "cucurbita", meaning "gourd".
Description.
Most of the plants in this family are annual vines, but some are woody lianas, thorny shrubs, or trees ("Dendrosicyos"). Many species have large, yellow or white flowers. The stems are hairy and pentangular. Tendrils are present at 90° to the leaf petioles at the nodes. Leaves are exstipulate, alternate, and simple, palmately lobed or palmately compound. The flowers are unisexual, with male and female flowers on different plants (dioecious) or on the same plant (monoecious). The female flowers have inferior ovaries. The fruit is often a kind of modified berry called a pepo.
Fossil history.
One of the oldest fossil cucurbits known so far is †"Cucurbitaciphyllum lobatum" from the Paleocene epoch, found at Shirley Canal, Montana. It was described for the first time in 1924 by the paleobotanist Frank Hall Knowlton. The fossil leaf is palmate, trilobed with rounded lobal sinuses and an entire or serrate margin. Its leaf pattern is similar to those of the genera "Kedrostis", "Melothria" and "Zehneria".
Classification.
Tribal classification.
The most recent classification of Cucurbitaceae delineates 15 tribes:
Systematics.
Modern molecular phylogenetics suggest the following relationships:
Pests and diseases.
Sweet potato whitefly is the vector of a number of cucurbit viruses that cause yellowing symptoms throughout the southern United States.
Chorded keyboard
https://en.wikipedia.org/wiki?curid=6336
A keyset or chorded keyboard (also called a chorded keyset, "chord keyboard" or "chording keyboard") is a computer input device that allows the user to enter characters or commands formed by pressing several keys together, like playing a "chord" on a piano. The large number of combinations available from a small number of keys allows text or commands to be entered with one hand, leaving the other hand free. A secondary advantage is that it can be built into a device (such as a pocket-sized computer or a bicycle handlebar) that is too small to contain a normal-sized keyboard.
A chorded keyboard minus the board, typically designed to be used while held in the hand, is called a keyer. Douglas Engelbart introduced the chorded keyset as a computer interface in 1968 at what is often called "The Mother of All Demos".
Principles of operation.
Each key is mapped to a number, and the numbers can in turn be mapped to corresponding letters or commands. By pressing two or more keys together, the user can generate many combinations. In Engelbart's original mapping, he used five keys with the values 1, 2, 4, 8, and 16, and the letters were numbered a = 1, b = 2, c = 3, d = 4, and so on. If the user pressed keys 1 and 2 simultaneously and then released them, the values 1 and 2 would be added to give 3, and since "c" is the 3rd letter of the alphabet, the letter "c" appeared. Unlike pressing a chord on a piano, the chord is recognized only after all the keys or mouse buttons are released. Since Engelbart introduced the keyset, several different designs have been developed based on similar concepts.
As a crude example, each finger might control one key which corresponds to one bit in a byte, so that using seven keys and seven fingers, one could enter any character in the ASCII set—if the user could remember the binary codes. Due to the small number of keys required, chording is easily adapted from a desktop to mobile environment.
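A minimal sketch of this decoding scheme in Python, using Engelbart's binary key weights and the a = 1, b = 2, ... mapping described above (chords are evaluated on release):

```python
# A minimal sketch of chord decoding in the style of Engelbart's keyset:
# each key carries a binary weight, the chord is registered on release,
# and the summed value indexes into the alphabet.

import string

KEY_WEIGHTS = {"k1": 1, "k2": 2, "k3": 4, "k4": 8, "k5": 16}

def decode_chord(pressed_keys):
    """Sum the weights of the released chord and map 1..26 to a..z."""
    value = sum(KEY_WEIGHTS[k] for k in pressed_keys)
    if 1 <= value <= 26:
        return string.ascii_lowercase[value - 1]
    return None  # values 27..31 would need a separate mapping

print(decode_chord({"k1"}))        # 1 -> 'a'
print(decode_chord({"k1", "k2"}))  # 1 + 2 = 3 -> 'c'
print(decode_chord({"k1", "k3"}))  # 1 + 4 = 5 -> 'e'
```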
Practical devices generally use simpler chords for common characters ("e.g.," Baudot), or may have ways to make it easier to remember the chords ("e.g.," Microwriter), but the same principles apply. These portable devices first became popular with the wearable computer movement in the 1980s.
Thad Starner from Georgia Institute of Technology and others published numerous studies showing that two-handed chorded text entry was faster and yielded fewer errors than on a QWERTY keyboard. Currently stenotype machines hold the record for fastest word entry. Many stenotype users can reach 300 words per minute. However, stenographers typically train for three years before reaching professional levels of speed and accuracy.
History.
The earliest known chord keyboard was part of the "five-needle" telegraph operator station, designed by Wheatstone and Cooke in 1836, in which any two of the five needles could point left or right to indicate letters on a grid. It was designed to be used by untrained operators (who would determine which keys to press by looking at the grid), and was not used where trained telegraph operators were available.
The first widespread use of a chord keyboard was in the stenotype machine used by court reporters, which was invented in 1868 and is still in use. The output of the stenotype was originally a phonetic code that had to be transcribed later (usually by the same operator who produced the original output), rather than arbitrary text—automatic conversion software is now commonplace.
In 1874, the five-bit Baudot telegraph code and a matching 5-key chord keyboard were designed to be used with the operator forming the codes manually. The code is optimized for speed and low wear: chords were chosen so that the most common characters used the simplest chords. But telegraph operators were already using typewriters with QWERTY keyboards to "copy" received messages, and at the time it made more sense to build a typewriter that could generate the codes automatically, rather than making operators learn to use a new input device.
Some early keypunch machines used a keyboard with 12 labeled keys to punch the correct holes in paper cards. The numbers 0 through 9 were represented by one punch; 26 letters were represented by combinations of two punches, and symbols were represented by combinations of two or three punches.
Braille (a writing system for the blind) uses either 6 or 8 tactile 'points' from which all letters and numbers are formed. When Louis Braille invented it, it was produced with a needle punching successively all the needed points in a cardboard sheet. In 1892, Frank Haven Hall, superintendent of the Illinois Institute for the Education of the Blind, created the Hall Braille Writer, which was like a typewriter with 6 keys, one for each dot in a braille cell. The Perkins Brailler, first manufactured in 1951, uses a 6-key chord keyboard (plus a spacebar) to produce braille output, and has been very successful as a mass-market affordable product. Braille, like Baudot, uses a number symbol and a shift symbol, which may be repeated for shift lock, to fit numbers and upper case into the 63 codes that 6 bits offer.
After World War II, with the arrival of electronics for reading chords and looking in tables of "codes", the postal sorting offices started to research chordic solutions to be able to employ people other than trained and expensive typists. In 1954, an important concept was discovered: chordic production is easier to master when the production is done at the release of the keys instead of when they are pressed.
Researchers at IBM investigated chord keyboards for both typewriters and computer data entry as early as 1959, with the idea that it might be faster than touch-typing if some chords were used to enter whole words or parts of words. A 1975 design by IBM Fellow Nat Rochester had 14 keys that were dimpled on the edges as well as the top, so one finger could press two adjacent keys for additional combinations. Their results were inconclusive, but research continued until at least 1978.
Doug Engelbart began experimenting with keysets to use with the mouse in the mid-1960s. In a famous 1968 demonstration, Engelbart introduced a computer human interface that included the QWERTY keyboard, a three button mouse, and a five key keyset. Engelbart used the keyset with his left hand and the mouse with his right to type text and enter commands. The mouse buttons marked selections and confirmed or aborted commands.
Users in Engelbart's Augmentation Research Center at SRI became proficient with the mouse and keyset. In the 1970s the funding Engelbart's group received from the Advanced Research Projects Agency (ARPA) was cut, and many key members of Engelbart's team went to work for Xerox PARC, where they continued to experiment with the mouse and keyset. Keychord sets were used at Xerox PARC in the early 1980s, along with mice and GUIs, on the Xerox Star and Alto workstations. A one-button version of the mouse was incorporated into the Apple Macintosh, but Steve Jobs decided against incorporating the chorded keyset.
In the early 1980s, Philips Research labs at Redhill, Surrey did a brief study into small, cheap keyboards for entering text on a telephone. One solution used a grid of hexagonal keys with symbols inscribed into dimples in the keys that were either in the center of a key, across the boundary of two keys, or at the joining of three keys. Pressing down on one of the dimples would cause either one, two or three of the hexagonal buttons to be depressed at the same time, forming a chord that would be unique to that symbol. With this arrangement, a nine-button keyboard with three rows of three hexagonal buttons could be fitted onto a telephone and could produce up to 33 different symbols. By choosing widely separated keys, one could employ one dimple as a 'shift' key to allow both letters and numbers to be produced. With eleven keys in a 3/4/4 arrangement, 43 symbols could be arranged, allowing for lowercase text, numbers, and a modest number of punctuation symbols to be represented, along with a 'shift' function for accessing uppercase letters. While this had the advantage of being usable by untrained users via 'hunt and peck' typing and requiring one less key switch than a conventional 12-button keypad, it had the disadvantage that some symbols required three times as much force to depress as others, which made it hard to achieve any speed with the device. That solution is still alive and is proposed by Fastap and Unitap among others; a commercial phone using it was produced and promoted in Canada during 2006.
Standards.
Historically, the Baudot and braille keyboards were standardized to some extent, but they are unable to replicate the full character set of a modern keyboard. Braille comes closest, as it has been extended to eight bits.
The only proposed modern standard, GKOS (or Global Keyboard Open Standard) can support most characters and functions found on a computer keyboard but has had little commercial development. There is, however, a GKOS keyboard application available for iPhone since May 8, 2010, for Android since October 3, 2010 and for MeeGo Harmattan since October 27, 2011.
Stenography.
Stenotype machines, sometimes used by court reporters, use a chording keyboard to represent sounds: on the standard keyboard, the U represents the sound and word 'you', and the three-key trigraph KAT represents the sound and word 'cat'. The stenotype keyboard is explicitly ordered: in KAT, K, on the left, is the starting sound. P, S, and T, which are common starting sounds and also common ending sounds, are available on both sides of the keyboard: POP is a 3-key chord, using both P keys.
Open-source designs.
Multiple open-source keyer/keyset designs are available, such as the pickey, a PS/2 device based on the PIC microcontroller; the spiffchorder, a USB device based on the Atmel AVR family of microcontrollers; the FeatherChorder, a BLE chorder based on the Adafruit Feather, an all-in-one board incorporating an Arduino-compatible microcontroller; and the GKOS keypad driver for Linux as well as the Gkos library for the Atmel/Arduino open-source board.
Plover is a free, open-source, cross-platform program intended to bring real-time stenographic technology not just to stenographers, but also to hobbyists using anything from professional Stenotype machines to low-cost NKRO gaming keyboards. It is available for Linux, Windows, and macOS.
Joy2chord is a chorded keyboard driver for Linux. With a configuration file, any joystick or gamepad can be turned into a chorded keyboard. This design philosophy was decided on to lower the cost of building devices, and in turn lower the entry barrier to becoming familiar with chorded keyboards. Macro keys, and multiple modes are also easily implemented with a user space driver.
Commercial devices.
One minimal chordic keyboard example is Edgar Matias' Half-Qwerty keyboard, described in a patent circa 1992, which produces the letters of the missing half when the user simultaneously presses the space bar along with the mirror key. INTERCHI '93 published a study by Matias, MacKenzie and Buxton showing that people who have already learned to touch-type can quickly recover 50 to 70% of their two-handed typing speed. The remaining loss is relevant to the speed discussion above. The design is implemented on two popular mobile phones, each provided with software disambiguation, which allows users to avoid using the space bar.
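The mirroring idea can be sketched by reflecting each QWERTY row about its center. Note that the exact mapping used by any real Half-Qwerty product may differ; the row reflection below is an illustrative assumption.

```python
# A minimal sketch of the Half-Qwerty mirroring idea: a key typed alone
# produces its own letter, while the same key typed with the space bar
# held produces its mirror-image letter from the other half of the row.

ROWS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]

MIRROR = {}
for row in ROWS:
    for i, ch in enumerate(row):
        MIRROR[ch] = row[len(row) - 1 - i]  # reflect across the row's center

def half_qwerty(key, space_held):
    """Return the character produced by one hand on a Half-Qwerty layout."""
    return MIRROR[key] if space_held else key

print(half_qwerty("f", False))  # 'f'
print(half_qwerty("f", True))   # mirrors to 'j'
print(half_qwerty("a", True))   # mirrors to ';'
```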
"Multiambic" keyers for use with wearable computers were invented in Canada in the 1970s. Multiambic keyers are similar to chording keyboards but without the board, in that the keys are grouped in a cluster for being handheld, rather than for sitting on a flat surface.
Chording keyboards are also used as portable but two handed input devices for the visually impaired (either combined with a refreshable braille display or vocal synthesis). Such keyboards use a minimum of seven keys, where each key corresponds to an individual braille point, except one key which is used as a spacebar. In some applications, the spacebar is used to produce additional chords which enable the user to issue editing commands, such as moving the cursor, or deleting words. Note that the number of points used in braille computing is not 6, but 8, as this allows the user, among other things, to distinguish between small and capital letters, as well as identify the position of the cursor. As a result, most newer chorded keyboards for braille input include at least nine keys.
Touch screen chordic keyboards are available to smartphone users as an optional way of entering text. As the number of keys is low, the button areas can be made bigger and easier to hit on the small screen. The most common letters do not necessarily require chording as is the case with the GKOS keyboard optimised layouts (Android app) where the twelve most frequent characters only require single keys.
The DecaTxt one-handed Bluetooth chord keyboard, by IN10DID, Inc., has ten keys, two at each finger, and is able to replace all standard keystrokes with chords of four keys or less. It is small, at 3.25" × 2.25", and weighs about 2 ounces, making it quite wearable strapped to either hand for use while walking. DecaTxt is generally considered assistive technology since it works with a variety of issues such as limited vision, limb loss, shaking, and poor motor skills.
The company CharaChorder sells chorded entry devices commercially. Its first commercially available device is the CharaChorder One, which features a split design with each half having access to 9 switches that can be moved in five directions (up, down, left, right, and pressed), in contrast to typical keyboards. The device allows both chorded entry and traditional character entry. The set of words that can be chorded can be dynamically changed by the user in real time, but by default includes the 300 most common words in the English language. This chorded entry feature allows for potentially extremely fast typing speeds, so much so that the founder of the company has been banned from online typing competitions. Additionally, the company created the CharaChorder Lite, with a more traditional keyboard design. The manufacturer claims that users of the CharaChorder One can reach speeds of 300 words per minute, while users of the CharaChorder Lite can reach 250 words per minute.
Historical.
The WriteHander, a 12-key chord keyboard from NewO Company, appeared in 1978 issues of ROM Magazine, an early microcomputer applications magazine.
Another early commercial model was the six-button Microwriter, designed by Cy Endfield and Chris Rainey, and first sold in 1980. Microwriting is the system of chord keying and is based on a set of mnemonics. It was designed only for right-handed use.
In 1982 the Octima eight-key chord keyboard was presented by Ergoplic Kebords Ltd, an Israeli startup founded by an Israeli researcher with extensive experience in man-machine interface design. The keyboard had 8 keys, one for each finger, and 3 additional keys that enabled the production of numbers, punctuation, and control functions. The keyboard was fully compatible with the IBM PC and AT keyboards and had an Apple IIe version as well. Its key combinations were based on a mnemonic system that enabled fast and easy touch-type learning. Within a few hours the user could achieve a typing speed similar to handwriting speed. The unique design also gave relief from hand stress (carpal tunnel syndrome) and allowed longer typing sessions than traditional keyboards. It was multi-lingual, supporting English, German, French and Hebrew.
The BAT is a 7-key hand-sized device from Infogrip, and has been sold since 1985. It provides one key for each finger and three for the thumb. It is proposed for the hand which does not hold the mouse, in an exact continuation of Engelbart's vision.
Carolyn Beug
https://en.wikipedia.org/wiki?curid=6337
Carolyn Ann Mayer-Beug (December 11, 1952 – September 11, 2001) was a filmmaker and video producer from Santa Monica, California. She died in the September 11 attacks as a passenger on American Airlines Flight 11.
Career.
In addition to her work as a video producer, Beug also directed three music videos for country singer Dwight Yoakam: "Ain't That Lonely Yet", "A Thousand Miles from Nowhere" and "Fast as You". Beug co-directed the first two videos with Yoakam and was the sole director of the third. She won an MTV Video Music Award for the Van Halen music video for the song "Right Now", which she produced. She also served as senior vice president of Walt Disney Records.
Personal life.
Beug lived in a Tudor-style home in the North 25th Street neighborhood. She hosted an annual backyard barbecue for the Santa Monica High School cross country and track team, which her daughters captained. Beug was a Latter-day Saint.
Death and legacy.
Beug was killed at the age of 48 in the crash of American Airlines Flight 11 in the September 11 attacks. At the time of her death, Carolyn Beug was working on a children's book about Noah's Ark which was to be told from Noah's wife's point of view. On the plane with her was her mother, Mary Alice Wahlstrom. Beug was survived by her twin eighteen-year-old daughters Lauren and Lindsey Mayer-Beug, her 13-year-old son, Nick, and her husband, John Beug, a senior vice president in charge of filmed production for Warner Brothers' record division. She was returning home from taking her daughters to college at the Rhode Island School of Design.
At the National September 11 Memorial, Beug is memorialized at the North Pool, on Panel N-1.
Cell biology
https://en.wikipedia.org/wiki?curid=6339
Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life, responsible for the living and functioning of organisms. Cell biology is the study of these structural and functional units. It encompasses both prokaryotic and eukaryotic cells and has many subtopics, which may include the study of cell metabolism, cell communication, the cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for, and are currently being used for, discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences, while also being essential for research in biomedical fields such as cancer and other diseases. Research in cell biology is interconnected with other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
History.
Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in "Micrographia") after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead and gave no indication of the actual overall components of a cell. A few years later, in 1674, Anton van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory, which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. 19 years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology.
Techniques.
Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to hold a better understanding of the structure and function of cells. Techniques commonly used to study cells include microscopy, cell culture, and cell fractionation.
Cell types.
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista.
Both types of prokaryote reproduce through binary fission. Bacteria, the more prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. Many processes occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility (F) factor allows a bacterium to form a pilus through which it can transmit DNA to another bacterium that lacks the F factor; this transfer of genetic material can confer resistance that allows the recipient to survive in certain environments.
Structure and function.
Structure of eukaryotic cells.
Eukaryotic cells contain membrane-bound organelles, including the nucleus, mitochondria, the endoplasmic reticulum, the Golgi apparatus, and lysosomes.
Eukaryotic cells are also composed of molecular components such as chromosomes, ribosomes, and cytoskeletal filaments.
Cell metabolism.
Cell metabolism is necessary for the production of energy for the cell, and therefore for its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. In cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation by the pyruvate dehydrogenase multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain, ultimately forming a proton gradient across the inner mitochondrial membrane. This gradient then drives the production of ATP during oxidative phosphorylation. Metabolism in plant cells includes photosynthesis, which is essentially the reverse of respiration in that it ultimately produces molecules of glucose.
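The energy bookkeeping in this paragraph can be made concrete with a back-of-the-envelope tally. The carrier counts are standard textbook values, and the ATP-per-carrier conversion factors (about 2.5 per NADH and 1.5 per FADH2) are commonly cited approximations; actual yields vary with the cell and conditions.

```python
# A rough tally of ATP yield per glucose under commonly cited textbook
# conversion factors; this is an approximation, not an exact figure.

carriers = {
    "glycolysis":             {"ATP": 2, "NADH": 2, "FADH2": 0},
    "pyruvate -> acetyl-CoA": {"ATP": 0, "NADH": 2, "FADH2": 0},
    "TCA cycle":              {"ATP": 2, "NADH": 6, "FADH2": 2},  # GTP counted as ATP
}

ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5

total = sum(
    step["ATP"] + step["NADH"] * ATP_PER_NADH + step["FADH2"] * ATP_PER_FADH2
    for step in carriers.values()
)
print(f"approximate ATP per glucose: {total}")  # ~32 by this accounting
```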
Cell signaling.
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or through endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine signaling is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its own surface.
Growth and development.
Eukaryotic cell cycle.
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development which involves cell growth, DNA replication, cell division, regeneration, and cell death.
The cell cycle is divided into four distinct phases: G1, S, G2, and M. Interphase – the growth portion of the cycle – makes up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling such as induction can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S phases (DNA replication, damage, and repair) are considered the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion. Mitosis is composed of many stages, which include prophase, metaphase, anaphase, telophase, and cytokinesis. The ultimate result of mitosis is the formation of two identical daughter cells.
The cell cycle is regulated at cell cycle checkpoints by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinases, and p53. When the cell has completed its growth process and is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it could pose to the organism's survival.
Cell mortality, cell lineage immortality.
The ancestry of each present-day cell presumably traces back, in an unbroken lineage, over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination.
Pathology.
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer, and precancerous cervical lesions that may lead to cervical cancer.
Cell cycle and DNA damage repair system.
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints constitute a monitoring strategy for accurate cell cycle progression and division. Cdks, their associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and de-phosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement and regulates the DNA repair mechanism, cell cycle alterations, and apoptosis. Among the biochemical structures and processes that detect DNA damage are ATM and ATR, which induce the DNA repair checkpoints.
The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. Major events happen during the cell cycle; the processes involved include cell development and the replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, wherein the cell's parameters are examined, and only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, cell growth continues as protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. DNA replication is restricted to a separate synthesis stage in eukaryotes, also known as the S-phase. During mitosis, also known as the M-phase, the segregation of the chromosomes occurs. DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, on the other hand, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. When erroneous nucleotides are incorporated during DNA replication, mutations can occur. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases altered by the insertion of methyl or ethyl groups at the purine ring's O6 position.
Mitochondrial membrane dynamics.
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP, which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology through the discovery of mitochondrial cell signaling pathways, which are crucial platforms for the regulation of cell functions such as apoptosis. The physiological adaptability of mitochondria is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including endomembrane fusion and fragmentation (separation) and ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic processes but also complicated cell signaling processes such as stem cell pluripotency, proliferation, maturation, aging, and mortality. Post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that affect mitochondrial membrane dynamics substantially. Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, deeply folded invaginations that provide room for surface area enlargement and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, by contrast, is soft and permeable. It therefore acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of mitochondrial study, it has been well documented that mitochondria can have a variety of forms, with both their general and ultra-structural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger systems; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to this adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.
Autophagy.
Autophagy is a self-degradative mechanism that regulates energy sources during growth and reaction to dietary stress. Autophagy also cleans up after itself, clearing aggregated proteins, cleaning damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular infections. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of distinctive and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macro-autophagy, micro-autophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macro-autophagy is triggered, an isolation membrane encloses a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, with lysosomal enzymes degrading the components. In micro-autophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) provides protein quality assurance by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein degradation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiologic and stressful situations, this cellular progression is vital for upholding the correct cellular balance. Autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration, due to its involvement in controlling cell integrity. The modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming polyphenol in the meal. As a result, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (phagophore), a step known as nucleation, is the first step in macro-autophagy. The phagophore targets dysregulated polypeptides or defective organelles, drawing membrane from the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's enlargement comes to an end with the completion of the autophagosome. The autophagosome then combines with lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.
Canadian English
https://en.wikipedia.org/wiki?curid=6340
Canadian English (CanE, CE, en-CA) encompasses the varieties of English used in Canada. According to the 2016 census, English was the first language of 19.4 million Canadians or 58.1% of the total population; the remainder spoke French (20.8%) or other languages (21.1%). In the province of Quebec, only 7.5% of the population speak English as their mother tongue, while most of Quebec's residents are native speakers of Quebec French.
The most widespread variety of Canadian English is Standard Canadian English, spoken in all the western and central provinces of Canada (varying little from Central Canada to British Columbia), plus in many other provinces among urban middle- or upper-class speakers from natively English-speaking families. Standard Canadian English is distinct from Atlantic Canadian English (its most notable subset being Newfoundland English), and from Quebec English. Accent differences can also be heard between those who live in urban centres versus those living in rural settings.
While Canadian English tends to be close to American English in most regards, classifiable together as North American English, Canadian English also possesses elements from British English as well as some uniquely Canadian characteristics. The precise influence of American English, British English, and other sources on Canadian English varieties has been the ongoing focus of systematic studies since the 1950s. Standard Canadian and General American English share identical or near-identical phonemic inventories, though their exact phonetic realizations may sometimes differ.
Canadians and Americans themselves often have trouble differentiating their own two accents, particularly since Standard Canadian and Western United States English have been undergoing a similar vowel shift since the 1980s.
History.
Canadian English as an academic field of inquiry solidified around the time of the Second World War. While early linguistic approaches date back to the second half of the 19th century, the first textbook to consider Canadian English in one form or another was not published until 1940. Walter S. Avis was its most forceful spokesperson after the Second World War until the 1970s. His team of lexicographers managed to date the term "Canadian English" to a speech by a Scottish Presbyterian minister, the Reverend Archibald Constable Geikie, in an address to the Canadian Institute in 1857 (see DCHP-1 Online, s.v. "Canadian English", Avis "et al.," 1967). Geikie, a Scottish-born Canadian, reflected the Anglocentric attitude that would be prevalent in Canada for the next hundred years when he referred to the language as "a corrupt dialect", in comparison with what he considered the proper English spoken by immigrants from Britain.
One of the earliest influences on Canadian English was the French language, which was brought to Canada by the French colonists in the 17th century. French words and expressions were adopted into Canadian English, especially in the areas of cuisine, politics, and social life. For example, words like "poutine" and "toque" are uniquely Canadian French terms that have become part of the Canadian English lexicon.
An important influence on Canadian English was British English, which was brought to Canada by British settlers in the 18th and 19th centuries. Canadian English borrowed many words and expressions from British English, including words like "lorry, flat", and "lift". However, Canadian English also developed its own unique vocabulary, including words like "toque, chesterfield", and "double-double". In the early 20th century, western Canada was largely populated by farmers from Central and Eastern Europe who were not anglophones. At the time, most anglophones there were re-settlers from Ontario or Quebec who had British, Irish, or Loyalist ancestry, or some mixture of these. Throughout the 20th century, the prairies underwent anglicization and linguistic homogenization through education and exposure to Canadian and American media.
American English also had a significant impact on the origins of Canadian English, and again in the 20th century and since, as a result of increased cultural and economic ties between the two countries. American English terms like "gasoline", "truck", and "apartment" are commonly used in Canadian English.
The growth of Canadian media, including television, film, and literature, has also played a role in shaping Canadian English. Chambers (1998) notes that Canadian media has helped to create new words and expressions that reflect Canadian culture and values. Canadian institutions, such as the CBC and the Canadian Oxford Dictionary, have also played a role in promoting and defining Canadian English.
In addition to these influences, Canadian English has also been minorly shaped by Indigenous languages. Indigenous words such as "moose, toboggan, "and "moccasin" have become part of the Canadian English lexicon.
Canadian English is the product of five waves of immigration and settlement over a period of more than two centuries. The first large wave of permanent English-speaking settlement in Canada, and linguistically the most important, was the influx of Loyalists fleeing the American Revolution, chiefly from the Mid-Atlantic States—as such, Canadian English is believed by some scholars to have derived from northern American English. Canadian English has been developing features of its own since the early 19th century. The second wave from Britain and Ireland was encouraged to settle in Canada after the War of 1812 by the governors of Canada, who were worried about American dominance and influence among its citizens. Further waves of immigration from around the globe peaking in 1910, 1960, and at the present time had a lesser influence, but they did make Canada a multicultural country, ready to accept linguistic change from around the world during the current period of globalization.
The languages of Aboriginal peoples in Canada started to influence European languages used in Canada even before widespread settlement took place, and the French of Lower Canada provided vocabulary, with words such as "tuque" and "portage", to the English of Upper Canada.
Overall, the history of Canadian English is a reflection of the country's diverse linguistic and cultural heritage. While Canadian English has borrowed many words and expressions from other languages, it has also developed its own unique vocabulary and pronunciation that reflects the country's distinct identity.
Historical linguistics.
Studies on earlier forms of English in Canada are rare, yet connections with other work in historical linguistics can be forged. An overview of diachronic work on Canadian English, or diachronically relevant work, is Dollinger. Until the 2000s, virtually all commentators on the history of CanE argued from the "language-external" history, i.e. social and political history. An exception has been in the area of lexis, where Avis "et al."'s 1967 "Dictionary of Canadianisms on Historical Principles" offered real-time historical data through its quotations. Starting in the 2000s, historical linguists have begun to study earlier Canadian English with historical linguistic data. DCHP-1 is now available in open access. Most notably, Dollinger (2008) pioneered the historical corpus linguistic approach for English in Canada with CONTE (Corpus of Early Ontario English, 1776–1849) and offers a developmental scenario for 18th- and 19th-century Ontario.
Canadian dainty.
Historically, Canadian English included a class-based sociolect known as "Canadian dainty". Treated as a marker of upper-class prestige in the 19th and early 20th centuries, Canadian dainty was marked by the use of some features of British English pronunciation, resulting in an accent similar, but not identical, to the Mid-Atlantic accent known in the United States. This accent faded in prominence following World War II, when it became stigmatized as pretentious, and is now rare. The Governor General Vincent Massey, the writer and broadcaster Peter Stursberg, the actor Lorne Greene, and the actor Christopher Plummer are examples of men who were raised in Canada but spoke with a British-influenced accent.
Spelling.
Canadian spelling of the English language combines British and American conventions, the two dominant varieties, and adds some domestic idiosyncrasies. For many words, American and British spelling are both acceptable. Spelling in Canadian English co-varies with regional and social variables, somewhat more so, perhaps, than in the two dominant varieties of English, yet general trends have emerged since the 1970s.
Canadian spelling conventions can be partly explained by Canada's trade history. For instance, Canada's automobile industry has been dominated by American firms from its inception, explaining why Canadians use the American spelling of "tire" (hence, "Canadian Tire") and American terminology for automobiles and their parts (for example, "truck" instead of "lorry", "gasoline" instead of "petrol", "trunk" instead of "boot").
Canada's political history has also had an influence on Canadian spelling. Canada's first prime minister, John A. Macdonald, once advised the Governor General of Canada to issue an order-in-council directing that government papers be written in the British style.
A contemporary reference for formal Canadian spelling is the spelling used for Hansard transcripts of the Parliament of Canada . Many Canadian editors, though, use the "Canadian Oxford Dictionary", often along with the chapter on spelling in "Editing Canadian English", and, where necessary (depending on context), one or more other references.
Throughout part of the 20th century, some Canadian newspapers adopted American spellings, for example, "color" as opposed to the British-based "colour". Some of the most substantial historical spelling data can be found in Dollinger (2010) and Grue (2013). The use of such spellings was the long-standing practice of the Canadian Press perhaps since that news agency's inception, but visibly the norm prior to World War II. The practice of dropping the letter "u" in such words was also considered a labour-saving technique during the early days of printing in which movable type was set manually. Canadian newspapers also received much of their international content from American press agencies, so it was much easier for editorial staff to leave the spellings from the wire services as provided.
In the 1990s, Canadian newspapers began to adopt the British spelling variants such as "-our" endings, notably with "The Globe and Mail" changing its spelling policy in October 1990. Other Canadian newspapers adopted similar changes later that decade, such as the Southam newspaper chain's conversion in September 1998. The "Toronto Star" adopted this new spelling policy in September 1997 after that publication's ombudsman discounted the issue earlier in 1997. The "Star" had always avoided using recognized Canadian spelling, citing the "Gage Canadian Dictionary" in their defence. Controversy around this issue was frequent. When the "Gage Dictionary" finally adopted standard Canadian spelling, the "Star" followed suit. Some publishers, e.g. "Maclean's", continue to prefer American spellings.
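The mixed British and American conventions described in this section can be made concrete with a small lookup table. The following Python sketch is purely illustrative, assuming a tiny hand-picked word list rather than any real rule set; the dictionary and function names are invented for the example.

```python
# Illustrative sketch only: a tiny lookup contrasting the mixed conventions
# described above. The word list is a small assumed sample, not a rule set.
CANADIAN_PREFERRED = {
    # British-style "-our" and "-re" endings are the Canadian norm...
    "color": "colour",
    "labor": "labour",
    "center": "centre",
    "theater": "theatre",
    # ...but American forms won out where US industry dominated.
    "tyre": "tire",
    "kerb": "curb",
    "aluminium": "aluminum",
}

def to_canadian(word: str) -> str:
    """Return the Canadian spelling for a known variant, else the word unchanged."""
    return CANADIAN_PREFERRED.get(word.lower(), word)

if __name__ == "__main__":
    for w in ["color", "tyre", "centre"]:
        print(w, "->", to_canadian(w))
```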
Standardization, codification and dictionaries.
The first series of dictionaries of Canadian English was published by Gage Ltd. under the chief-editorships of Charles J. Lovell and Walter S. Avis as of 1960 and the "Big Six" editors plus Faith Avis. The "Beginner's Dictionary" (1962), the "Intermediate Dictionary" (1964) and, finally, the "Senior Dictionary" (1967) were milestones in Canadian English lexicography. In November 1967 A Dictionary of Canadianisms on Historical Principles (DCHP) was published and completed the first edition of Gage's Dictionary of Canadian English Series. The DCHP documents the historical development of Canadian English words that can be classified as "Canadianisms". It therefore includes words such as mukluk, Canuck, and bluff, but does not list common core words such as desk, table or car. Many secondary schools in Canada use the graded dictionaries. The dictionaries have regularly been updated since: the "Senior Dictionary," edited by Robert John Gregg, was renamed "Gage Canadian Dictionary". Its fifth edition was printed beginning in 1997. Gage was acquired by Thomson Nelson around 2003. The latest editions were published in 2009 by HarperCollins. On 17 March 2017 a second edition of DCHP, the online Dictionary of Canadianisms on Historical Principles 2 (DCHP-2), was published. DCHP-2 incorporates the c. 10 000 lexemes from DCHP-1 and adds c. 1 300 novel meanings or 1 002 lexemes to the documented lexicon of Canadian English.
In 1998, Oxford University Press produced a Canadian English dictionary, after five years of lexicographical research, entitled "The Oxford Canadian Dictionary". A second edition, retitled "The Canadian Oxford Dictionary", was published in 2004. Just as the older dictionaries it includes uniquely Canadian words and words borrowed from other languages, and surveyed spellings, such as whether "colour" or "color" was the more popular choice in common use. Paperback and concise versions (2005, 2006), with minor updates, are available.
Since 2022, the Editors' Association of Canada has been leading the writing of a new "Canadian English Dictionary" within a national dictionary consortium. The consortium comprises the Editors' Association of Canada, the UBC Canadian English Lab, and Queen's University's Strathy Language Unit.
Phonology and phonetics.
It is quite common for Canadian English speakers to have the cot-caught merger, the father-bother merger, the Low-Back-Merger Shift (with the vowel in words such as "trap" moving backwards), Canadian raising (words such as "like" and "about" pronounced with a higher first vowel in the diphthong) and no trap-bath split. Canadian raising is when the onsets of the diphthongs /aɪ/ and /aʊ/ are raised to [ʌɪ] and [ʌʊ] before voiceless segments. There are areas in the eastern U.S. where some words are pronounced with Canadian raising.
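Canadian raising as just described is a simple conditioned sound change, which can be sketched programmatically. The Python fragment below applies the rule to toy IPA strings; the simplified voiceless-consonant set and the transcriptions are assumptions made for illustration, not a complete phonological model.

```python
# Minimal sketch of the Canadian raising rule described above: the diphthongs
# /aɪ/ and /aʊ/ are raised to [ʌɪ] and [ʌʊ] when a voiceless consonant follows.
# The IPA strings and the consonant set below are illustrative assumptions.
VOICELESS = set("ptkfθsʃ")  # a simplified set of voiceless consonants

def apply_raising(ipa: str) -> str:
    out = []
    i = 0
    while i < len(ipa):
        two = ipa[i:i + 2]
        nxt = ipa[i + 2] if i + 2 < len(ipa) else ""
        if two in ("aɪ", "aʊ") and nxt in VOICELESS:
            out.append("ʌ" + two[1])  # raise the onset of the diphthong
            i += 2
        else:
            out.append(ipa[i])
            i += 1
    return "".join(out)

# "write" raises (voiceless /t/ follows); "ride" does not (voiced /d/ follows).
print(apply_raising("ɹaɪt"))   # -> ɹʌɪt
print(apply_raising("ɹaɪd"))   # -> ɹaɪd
print(apply_raising("əbaʊt"))  # -> əbʌʊt
```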
Some young Canadians may show Goose-fronting. U.S. southern dialects have long had goose-fronting, but this goose-fronting among young Canadians and Californians is more recent. Some young Californians also show signs of the Low-Back-Merger Shift. The cot-caught merger is perhaps not general in the U.S., but younger speakers seem more likely to have it.
The Canadian Oxford Dictionary lists words such as "no" and "way" as having a long monophthong vowel sound ([oː] and [eː]), whereas American dictionaries usually have these words ending in an upglide ([oʊ] and [eɪ]).
In terms of the major sound systems (phonologies) of English around the world, Canadian English aligns most closely to American English. Some dialectologists group Canadian and American English together under a common North American English sound system. The mainstream Canadian accent ("Standard Canadian") is often compared to the General American accent, a middle ground lacking in noticeable regional features.
Western Canada (British Columbia, Alberta, Saskatchewan, Manitoba) shows the largest dialect diversity. Northern Canada is, according to William Labov, a dialect region in formation where a homogeneous English dialect has not yet formed. Labov's research focused on urban areas, and did not survey the country, but it found similarities among the English spoken in Ottawa, Toronto, Calgary, Edmonton and Vancouver. Labov identifies an "Inland Canada" region that concentrates all of the defining features of the dialect centred on the Prairies (a region in Western Canada that mainly includes Alberta, Saskatchewan, and Manitoba and is known for its grasslands and plains), with more variable patterns including the metropolitan areas of Vancouver and Toronto. This dialect forms a dialect continuum with Western US English, sharply differentiated from Inland Northern US English of the central and eastern Great Lakes region, where the Northern Cities Shift is sending front vowels in the opposite direction to the Low-Back-Merger Shift heard in Canada and California.
Standard.
Standard Canadian English is socially defined. It is spoken by those who live in urban Canada, hold a middle-class job (or one of their parents holds such employment), are second generation or later (born and raised in Canada), and speak English as (one of) their dominant language(s) (Dollinger 2019a, adapted from Chambers 1998). It is the variety spoken, in Chambers' (1998: 252) definition, by Anglophone or multilingual residents who are second generation or later (i.e. born in Canada) and who live in urban settings. Applying this definition, c. 36% of the Canadian population spoke Standard Canadian English according to the 2006 census, and 38% according to the 2011 census.
Regional variation.
The literature has long conflated the notions of Standard Canadian English (StCE) and regional variation. While some regional dialects are close to Standard Canadian English, they are not identical to it. To the untrained ear, for instance, a BC middle-class speaker from a rural setting may seemingly be speaking Standard Canadian English, but, given Chambers' definition, such a person, because of their rural provenance, would not be included in the accepted definition (see the previous section). The "Atlas of North American English", while being the best source for US regional variation, is not a good source for Canadian regional variation, as its analysis is based on only 33 Canadian speakers. Boberg's (2005, 2008) studies offer the best data for the delimitation of dialect zones. The results for vocabulary and phonetics overlap to a great extent, which has allowed the proposal of dialect zones. Dollinger and Clarke distinguish between several such zones.
Indigenous.
The words "Aboriginal" and "Indigenous" are capitalized when used in a Canadian context.
First Nations and Inuit from Northern Canada speak a version of Canadian English influenced by the phonology of their first languages. Non-indigenous Canadians in these regions are relatively recent arrivals, and have not produced a dialect that is distinct from southern Canadian English.
Overall, First Nations English dialects in Canada rest between language loss and language revitalization. British Columbia has the greatest linguistic diversity, as it is home to about half of the Indigenous languages spoken in Canada. Most of the languages spoken in the province are endangered due to their small numbers of speakers. To some extent, the dialects reflect the historical contexts where English has been a major colonizing language. The dialects are also a result of the late stages of depidginization and decreolization, which produced linguistic markers of Indigenous identity and solidarity. These dialects developed as a lingua franca due to the contact between English and Indigenous populations, and eventually the various dialects began to converge with standard English.
Certain First Nations English varieties have also been shown to differ phonologically from standard Canadian English, resulting in more distinct dialect formation. Plains Cree, for instance, is a language that has fewer phonological contrasts than standard Canadian English. Plains Cree has no voicing contrast: the stops /p/, /t/, and /k/ are mostly voiceless and unaspirated, though they may vary in other phonetic environments from voiceless to voiced. Plains Cree also lacks the liquids and fricatives found in the standard form. Dene Suline, on the other hand, has more phonological contrasts, resulting in the use of features not seen in the standard form. The language has 39 phonemic consonants and a higher proportion of glottalized consonants.
Maritimes.
Many in the Maritime provinces – Nova Scotia, New Brunswick and Prince Edward Island – have an accent that sounds more like Scottish English and, in some places, Irish English than General American. Outside of major communities, dialects can vary markedly from community to community, as well as from province to province, reflecting ethnic origin as well as a past in which there were few roads and many communities, some of them isolated villages. Into the 1980s, residents of villages in northern Nova Scotia could identify themselves by dialects and accents distinctive to their village. The dialects of Prince Edward Island are often considered the most distinct grouping.
The phonology of Maritimer English has some unique features:
Nova Scotia
As with many other distinct dialects, vowels are a marker of Halifax English as a distinctive variant of Canadian English. Typically, Canadian dialects merge the low back vowels in "palm", "lot", "thought" and "cloth"; the merged vowel in question is usually /ɑ/ or sometimes the rounded variant /ɒ/. In Halifax, however, the vowel is raised and rounded, for example in "body", "popped", and "gone". In the homophone pairs "caught"–"cot" and "stalk"–"stock", the rounding of the merged vowel is also much more pronounced than in other Canadian varieties. The Canadian Shift is also not as evident in the traditional dialect; instead, the front vowels are raised. For example, the vowel in "had" is raised to [hæed], and "camera" is raised to [kæmra].
Although it has not been studied extensively, the speech of Cape Breton specifically seems to bear many similarities with the nearby island of Newfoundland, which is often why Westerners can have a hard time differentiating the two accents. For instance, they both use the fronting of the low back vowel. These similarities can be attributed to geographic proximity, the fact that about one-quarter of the Cape Breton population descends from Irish immigrants (many of whom arrived via Newfoundland) and the Scottish and Irish influences on both provinces. The speech of Cape Breton can almost be seen as a continuum between the two extremes of the Halifax variant and the Newfoundland variant. In addition, there is heavy influence of standard varieties of Canadian English on Cape Breton English, especially in the diphthongization of the goat and goose vowels and the frequent use of Canadian raising.
Newfoundland.
Compared to the commonly spoken English dominating neighbouring provinces, Newfoundland English is famously distinct in its dialects and accents. Newfoundland English differs in vowel pronunciation, morphology, syntax, and preservation of archaic adverbial-intensifiers. The dialect varies markedly from community to community, as well as from region to region. Its distinctiveness partly results from a European settlement history that dates back centuries, which explains Newfoundland's most notable linguistic regions: an Irish-settled area in the southeast (the southern Avalon Peninsula) and an English-settled area in the southwest.
A well-known phonetic feature many Newfoundland speakers possess is the kit-dress merger. The mid lax /ɛ/ here is raised to the high lax stressed /ɪ/, particularly before oral stops and nasals, so consequently "pen" is pronounced more like "pin".
Another phonetic feature more unique to Newfoundland English is TH-stopping. Here, the voiceless dental fricative /θ/ in words like "myth" and "width" is pronounced more like "t", and the voiced dental fricative /ð/ in words like "the" and "these" more like "d". TH-stopping is more common for /ð/, especially in unstressed function words (e.g. that, those, their, etc.).
Ontario.
Canadian raising is quite strong throughout the province of Ontario, except within the Ottawa Valley. The introduction of Canadian raising to Canada can be attributed to the Scottish and Irish immigrants who arrived in the 18th and 19th centuries. Research tracing the origins of Canadian raising to Scotland revealed that the Scottish dialects spoken by these immigrants had a probable impact on its development. This feature affects the pronunciation of the /aɪ/ sound in "right" and the /aʊ/ sound in "lout". Canadian raising describes a scenario where the start of the diphthong is nearer to the destination of the glide before voiceless consonants than before voiced consonants. The Canadian Shift is also a common vowel shift found in Ontario. The retraction of /æ/ was found to be more advanced for women in Ontario than for people from the Prairies or Atlantic Canada and for men.
In the southern part of Southwestern Ontario (roughly in the line south from Sarnia to St. Catharines), despite the existence of many characteristics of West/Central Canadian English, many speakers, especially those under 30, speak a dialect influenced by the Inland Northern American English dialect (in part due to proximity to cities like Detroit and Buffalo, New York) though there are minor differences such as Canadian raising (e.g. "ice" vs "my").
The north and northwestern parts of Southwestern Ontario, the area consisting of the Counties of Huron, Bruce, Grey, and Perth, referred to as the "Queen's Bush" in the 19th century, had little communication with the dialects of the southern part of Southwestern Ontario and Central Ontario until the early 20th century. Thus, a strong accent similar to Central Ontarian is heard, yet many different phrasings exist. It is typical in the area to drop phonetic sounds to make shorter contractions, such as: "prolly" (probably), "goin" (going), and "Wuts goin' on tonight? D'ya wanna do sumthin'?". It is particularly strong in the County of Bruce, so much so that it is commonly referred to as the Bruce Cownian (Bruce Countian) accent. Also, /ɜr/ merges with /ɛr/, with "were" sounding more like "wear".
Residents of the Golden Horseshoe (including the Greater Toronto Area) are known to merge the second /t/ with the /n/ in "Toronto", pronouncing the name variously as [təˈɹɒɾ̃o] or [ˈtɹɒɾ̃o]. This is not unique to Toronto; Atlanta is often pronounced "Atlanna" by residents. Sometimes /ð/ is elided altogether, resulting in "Do you want this one er'iss one?" The word "southern" is often pronounced with [aʊ]. In the area north of the Regional Municipality of York and south of Parry Sound, notably among those who were born in the surrounding communities, the cutting down of syllables and consonants is often heard, e.g. "probably" is reduced to "prolly" or "probly" when used as a response. In Greater Toronto, the diphthong /aʊ/ tends to be fronted (as a result the word "about" is pronounced as [əˈbɛʊt]). The Greater Toronto Area is linguistically diverse, with 43 percent of its people having a mother tongue other than English. As a result, Toronto English has distinctly more variability than Inland Canada.
In Eastern Ontario, Canadian raising is not as strong as it is in the rest of the province. In Prescott and Russell, parts of Stormont-Dundas-Glengarry and Eastern Ottawa, French accents are often mixed with English ones due to the high Franco-Ontarian population there. In Lanark County, Western Ottawa and Leeds-Grenville and the rest of Stormont-Dundas-Glengarry, the accent spoken is nearly identical to that spoken in Central Ontario and the Quinte area.
A linguistic enclave has also formed in the Ottawa Valley, heavily influenced by original Scottish, Irish, and German settlers, and existing along the Ontario-Quebec boundary, which has its own distinct accent known as the Ottawa Valley twang (or brogue). Phonetically, the Ottawa Valley twang is characterized by the lack of Canadian raising as well as the cot–caught merger, two common elements of mainstream Canadian English. This accent is quite rare in the region today.
Quebec.
English is a minority language in Quebec (with French the majority), but has many speakers in Montreal, the Eastern Townships and in the Gatineau-Ottawa region. A person whose mother tongue is English and who still speaks English is called an "Anglophone", versus a "Francophone", or French speaker.
Many people in Montreal distinguish between words like "marry" versus "merry" and "parish" versus "perish", which are homophones to most other speakers of Canadian English. Quebec Anglophones generally pronounce French street names in Montreal as French words. "Pie IX" Boulevard is pronounced as in French: not as "pie nine" but as (compare French /pi.nœf/). On the other hand, Anglophones pronounce the final "d" as in "Bernard" and "Bouchard"; the word "Montreal" is pronounced as an English word and "Rue Lambert-Closse" is known as "Clossy Street" (vs French /klɔs/). In the city of Montreal, especially in some of the western suburbs like Côte-St-Luc and Hampstead, there is a strong Jewish influence in the English spoken in those areas. A large wave of Jewish immigration from Eastern Europe and the former Soviet Union before and after World War II is also evident today. Their English has a strong Yiddish influence, and there are some similarities to English spoken in New York. Words used mainly in Quebec and especially in Montreal are: "stage" for "apprenticeship" or "internship", "copybook" for a notebook, "dépanneur" or "dep" for a convenience store, and "guichet" for an ABM/ATM. It is also common for Anglophones, particularly those of Greek or Italian descent, to use translated French words instead of common English equivalents such as "open" and "close" for "on" and "off" or "Open the lights, please" for "Turn on the lights, please".
West.
Western Canadian English describes the English spoken in the four most western provinces—British Columbia, Alberta, Saskatchewan, and Manitoba. British Columbia, in particular, is a sub-zone on the lexical level. Phonetically, Western Canadian English shows much more raising and much less retraction of certain vowels than varieties further east, and its Canadian-raised vowels are articulated further back.
British Columbia.
British Columbia English shares dialect features with both Standard Canadian English and the American Pacific Northwest English. In Vancouver, speakers exhibit more vowel retraction of /æ/ before nasals than people from Toronto, and this retraction may become a regional marker of West Coast English. /ɛ/ raising (found in words such as "beg", "leg", and "peg") and /æ/ raising (found in words such as "bag", "lag" and "rag"), a prominent feature among Northwestern American speakers, is also found among Vancouver speakers, causing "beg" to sound like the first syllable of "bagel" and "bag" to sound similar. In the past, the ANAE reported that Vancouverites' participation in the Canadian raising of /aɪ/ was questionable, but nowadays they tend to raise both /aɪ/ and /aʊ/. The "o" in such words as "holy", "goal", "load", "know", etc. is pronounced as a close-mid back rounded vowel, [o], but not as rounded as in the Prairies, where strong Scandinavian, Slavic and German influences can lend a more stereotypical "Canadian" accent.
Finally, there is also the /t/ sound, which according to Gregg (2016), "with many [Vancouver] speakers [is] intrusive between /l/ or /n/ and /s/ in words like sense [sɛnts], Wilson /wɪltsən/ [and] also /'ɒltsoʊ/".
Saskatchewan.
English in Saskatchewan has its pool of phonetic features shared with other provinces used by certain demographics. For instance, it has the consonant variables /ntV/ and /VtV/, the latter being a common feature of North American English and is defined as the intervoicing of /t/ between vowels. Meanwhile, /ntV/ "frequently occurs in words such as "centre" and "twenty" where /t/ follows the alveolar nasal /n/ and precedes an unstressed vowel". According to Nylvek (1992), both variables of /t/ are generally more often used by younger male over older female speakers.
Grammar.
There are a handful of syntactical practices unique to Canadian English. When writing, Canadians may start a sentence with "As well", in the sense of "in addition"; this construction is a Canadianism.
North American English prefers "have got" to "have" to denote possession or obligation (as in "I've got a car" vs. "I have a car"); Canadian English differs from American English in tending to eschew plain "got" ("I got a car"), which is a common third option in informal US English.
The grammatical construction ""be done" something" means roughly ""have/has finished" something". For example, "I am done my homework" and "The dog is done dinner" are genuine sentences in this dialect, respectively meaning "I have finished my homework" and "The dog has finished dinner". Another example, "Let's start after you're done all the coffee", means "Let's start after you've finished all the coffee". This is not exactly the same as the standard construction ""to be done with" something", since "She is done the computer" can only mean "She is done with the computer" in one sense: "She has finished (building) the computer".
Date and time notation.
Date and time notation in Canadian English is a mixture of British and American practices. The date can be written in the form of either "July 1, 2017" or "1 July 2017"; the latter is common in more formal writing and bilingual contexts. The Government of Canada only recommends writing all-numeric dates in the form of YYYY-MM-DD (e.g. 2017-07-01), following ISO 8601. Nonetheless, the traditional DD/MM/YY and MM/DD/YY systems remain in everyday use, and can be interpreted in multiple ways: 01/07/17 can mean either 1 July 2017 or 7 January 2017. Private members' bills have repeatedly attempted to clarify the situation. In business communication and filing systems, the YYMMDD format is used to assist in automatic ordering of electronic files.
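The ambiguity of the two-digit forms can be demonstrated with Python's standard library: the same string parses to different dates under the DD/MM/YY and MM/DD/YY readings, while the ISO 8601 form recommended by the Government of Canada is unambiguous. A minimal sketch (the variable names are our own):

```python
# Illustrates the ambiguity described above: the same numeric string parses to
# different dates under DD/MM/YY and MM/DD/YY, while ISO 8601 is unambiguous.
from datetime import datetime

raw = "01/07/17"
as_ddmmyy = datetime.strptime(raw, "%d/%m/%y").date()
as_mmddyy = datetime.strptime(raw, "%m/%d/%y").date()

print("DD/MM/YY reading:", as_ddmmyy)              # 2017-07-01
print("MM/DD/YY reading:", as_mmddyy)              # 2017-01-07
print("ISO 8601 form:   ", as_ddmmyy.isoformat())  # unambiguous: 2017-07-01
```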
The government also recommends use of the 24-hour clock, which is widely used in contexts such as transportation schedules, parking meters, and data transmission. Many speakers of English use the 12-hour clock in everyday speech, even when reading from a 24-hour display, similar to the use of the 24-hour clock in the United Kingdom.
Vocabulary.
Where Canadian English shares vocabulary with other English dialects, it tends to share most with American English, but also has many non-American terms distinctively shared instead with Britain. British and American terms can also coexist in Canadian English to various extents, sometimes with new nuances in meaning; a classic example is "holiday" (British), often used interchangeably with "vacation" (American), though, in Canadian speech, the latter can more narrowly mean a trip elsewhere and the former can mean general time off work. In addition, the vocabulary of Canadian English also features some words that are seldom (if ever) found elsewhere. A good resource for these and other words is "A Dictionary of Canadianisms on Historical Principles", which is currently being revised at the University of British Columbia in Vancouver, British Columbia. The Canadian public appears to take interest in unique "Canadianisms": words that are distinctively characteristic of Canadian English, though perhaps not exclusive to Canada; there is some disagreement about the extent to which "Canadianism" means a term actually unique to Canada, with such an understanding possibly overstated by the popular media. As a member of the Commonwealth of Nations, Canada shares many items of institutional terminology and professional designations with the countries of the former British Empire—for example, "constable", for a police officer of the lowest rank, and "chartered accountant".
Regional variation.
While Canadian English has vocabulary that distinguishes it from other varieties of English across the world, there is significant regional variation in its lexis within Canada as well. A balanced cross-continental sample of 1800 Canadians and 360 Americans from across Canada and the USA underlies Boberg's North American Regional Vocabulary Survey (NARVS), a questionnaire employed by Boberg from 1999 to 2007 that sought out lexical items that vary regionally within Canada. Six regions were identified in the NARVS data collection: the West, which includes British Columbia and the Prairies; Ontario; Quebec, which mostly represents data from Montreal; New Brunswick and Nova Scotia; Prince Edward Island; and Newfoundland. Many regional differences in the lexis are item-specific. For example, one of these items concerns the nationally enjoyed meal of pizza, and more specifically, the term used to refer to a pizza that features all available toppings. While Atlantic Canada refers to this order as 'the works', the majority term used from eastern Ontario to the West Coast is 'deluxe', and terms such as 'all-dressed' and 'everything-on-it' are used in Quebec and Toronto, respectively. Other examples include the regionally varied usage of running shoes/runners/sneakers to describe athletic shoes, and notebook/scribbler/cahier to describe a plain note-pad. Despite the regional variation of vocabulary items within Canada, the lexis of Canadian English still maintains greater commonality between its own regions than it does with American English or British English.
Quebec.
Quebec recognizes French as its primary language. As a result, English has no official status in Quebec and is not used often in the public sphere, although in more metropolitan areas such as Montreal or Quebec City it is not uncommon to see English media in public, such as in advertisements and store-fronts. Also, the provincial government must officially be referred to as the "Gouvernement du Québec", regardless of the language being used by the speaker. While the lexical catalog of Quebec English contains items influenced by or borrowed from French, the influence of the dominant French language on Quebec English is marginal. The francophone dominance in Quebec makes the province a linguistic anomaly within Canada, where English maintains a negligible role in government and public domains. The French influence on Quebec English operates through five distinct processes, as identified by Charles Boberg: elective direct lexical transfer of non-English words (e.g., "garderie" for daycare); imposed direct lexical transfer of non-English words (e.g., SAQ for "Société des alcools du Québec"); loan translations or calques (such as 'all-dressed' from the French 'toute garnie'); semantic shifts of existing English words (like 'magasin' for 'store'); and syntactic influences (e.g., "we're living here three years" instead of the English "we've been living here for three years"). Although Quebec English differs from other Canadian regional lexes due to its special contact with French, it still shares some similarities with the lexis of other Canadian regions. For instance, the use of lexical items such as all-dressed has been successfully transferred to most other Canadian regional lexes.
Ontario.
Southern Ontario was initially settled by white Protestants, with the late 19th century witnessing the migration of white Protestant settlers from Ontario to western Canada following the suppression of the Métis opposition. This migration facilitated the transplantation of the Ontario accent and the emergence of a homogeneous Canadian English dialect. Distinctive to Ontario are Canadianisms such as concession roads, which refer to roads that transect a township, dew-worm, which refers to an earthworm, and fire-reel, which refers to a fire truck. Walter S. Avis identified several linguistic features characteristic of Ontarians, including their preference for the word vacation, rather than holiday—which is considered more British English—and sack over paper bag. While there may be numerous such lexical differences in the speech of provincial and national borderers, Avis asserts that these are relatively minor compared to the linguistic features held in common. Furthermore, Avis suggests that the difference between American English and Ontario English is relatively small near the border due to their close proximity. The historical settlement patterns of southern Ontario, coupled with linguistic research, indicate the existence of distinctively Ontarian lexical items. However, Ontario maintains greater similarities with other Canadian regions than it does with the neighbouring American English and its regional variations.
Northern Ontario English has several distinct qualities stemming from its large Franco-Ontarian population. As a result, several French and English words are used interchangeably. A number of phrases and expressions may also be found in Northern Ontario that are not present in the rest of the province, such as the use of "camp" for a summer home, where Southern Ontario speakers would idiomatically use "cottage".
In the mid-to-late 1990s, certain words from Jamaican Patois, Arabic, and Somali were incorporated into the local variety of English by Toronto youth, especially in immigrant communities, giving rise to Toronto slang. Examples included words such as "mandem", "styll", "wallahi", "wasteman", and "yute".
Prairies (Manitoba, Saskatchewan and Alberta).
The Prairies, consisting of Manitoba, Saskatchewan, and Alberta, have their own lexical features. The settlement patterns of these regions, along with their Indigenous communities, specifically the large Métis populations in Saskatchewan and Manitoba, left a linguistic legacy that carries traits inherited from French, Indigenous, and Celtic forebears. The linguistic features brought by Ukrainian, German, and Mennonite populations in the Saskatchewan Valley of Saskatchewan and the Red River Valley of Manitoba have also influenced the lexis of the Prairies. Some terms are derived from these groups and some are formed within the region by locals over time. An example of the former is the high-profile variable "bunnyhug", a term for a hooded sweatshirt in Saskatchewan. As discussed in The Dictionary of Canadianisms on Historical Principles, "bunnyhug" is purposely and commonly used by young Saskatchewan speakers to indicate a sense of provincial identity, and is referred to as a Saskatchewanism. Circumstantial evidence suggests that teenagers played a crucial role in the spread and adoption of the term. Across Saskatchewan, Alberta, and Manitoba there are other terms consistent in or throughout the three provinces: "biffed" is a term for falling, as in "John biffed it over there"; "pickerel" is Manitoba's official fish, also known as walleye; and "play structure" is used to describe a playground for children consisting of monkey bars, slides, etc.
Atlantic Canada (New Brunswick & Nova Scotia, PEI, Newfoundland).
Canada's Atlantic provinces were the first part of North America to be explored by Europeans. The provinces historically and collectively called the Maritimes consist of New Brunswick, Nova Scotia, and Prince Edward Island; Newfoundland and Labrador, which is not part of the Maritimes, is also part of Atlantic Canada. Historical immigration from Europe has shaped cultures and lexical catalogs across the regions of Atlantic Canada that reflect British, Scottish, Gaelic, and French customs, and the vernacular varieties of English spoken in the region reflect this heritage. Newfoundland and Labrador English (NLE) possesses unique vocabulary compared to standard Canadian English. The Dictionary of Newfoundland English covers the vocabulary common to Newfoundlanders, such as Newfoundland "screech rum", a Newfoundland-specific brand of rum; "mummering", referring to a Christmas tradition; and "gut-foundered", meaning starving or famished. Nova Scotia is also home to its own vocabulary: the term "Sobeys bag", used to refer to a plastic grocery bag, originates from the Nova Scotian grocery store chain Sobeys. Similarly, Prince Edward Island has its own vocabulary and dictionary; for example, "angishore" refers to a fisherman who is too lazy to fish, and is likely a lexical item originating with Irish Gaelic settlers in Newfoundland. Sarah Sawler, a writer from Halifax, highlights terms common to the Maritimes, such as "dooryard" for front yard, "owly" for when someone is angry or irritable, and "biff" for throw.
Education.
The term "college", which refers to post-secondary education in general in the US, refers in Canada to either a post-secondary technical or vocational institution, or to one of the colleges that exist as federated schools within some Canadian universities. Most often, a "college" is a community college, not a university. It may also refer to a CEGEP in Quebec. In Canada, might denote someone obtaining a diploma in business management, an equivalent of this would be an associate degree in the United States. In contrast, is the term for someone earning a bachelor's degree, typically at a post-secondary university institution. Hence, the term in Canada does not have the same meaning as , unless the speaker or context clarifies the specific level of post-secondary education that is meant.
Within the public school system the chief administrator of a school is generally "the principal", as in the United States, but the term is not used preceding their name, i.e., "Principal Smith". The assistant to the principal is not titled as "assistant principal", but rather as "vice-principal", although the former is not unknown. This usage is identical to that in Northern Ireland.
Canadian universities publish "calendars" or "schedules", not "catalogs" as in the US. Canadian students "write" or "take" exams (in the US, students generally "take" exams while teachers "write" them); they rarely "sit" them (standard British usage). Those who supervise students during an exam are sometimes called "invigilators" as in Britain, or sometimes "proctors" as in the US; usage may depend on the region or even the individual institution.
Successive years of school are usually referred to as "grade one", "grade two", and so on. In Quebec, Francophone speakers will often say "primary one", "primary two" as a direct translation from the French, and so on; while Anglophones will say "grade one", "grade two". These terms are comparable with the American "first grade, second grade" (which is used in Canada, yet is rare), English/Welsh "Year 1, Year 2", Scottish/Northern Irish "Primary 1, Primary 2" or "P1, P2", and Southern Irish "First Class, Second Class" and so on. The year of school before grade 1 is usually called "Kindergarten", with the exception of Nova Scotia, where it is called "grade primary". In addition, children younger than the public school start age may attend "pre-primary", although this is a newer addition to the Nova Scotian public-school system, and is not used frequently elsewhere.
In parts of the US, the four years of high school are termed the freshman, sophomore, junior, and senior years (terms also used for college years); in Canada, the specific levels are used instead, such as "grade nine" in lieu of freshman. As for higher education, only the term "freshman" (often reduced to "frosh") has some currency in Canada. Moreover, some Canadian public-school systems have adolescents start high school in "Grade 10", the sophomore year, although this can depend on the province and even vary within a school district. The American usages "sophomore", "junior" and "senior" are not used in Canadian university terminology, or in speech. The specific high-school grades and university years are therefore stated and individualized; for example, "Sarah is starting Grade 10 this year", which Americans would state as "Sarah is going to be a sophomore this year". Similarly, in the post-secondary education context, one says "Francois is in second year of university" rather than the Americanism "Francois is a sophomore in college".
Canadian students use the term "marks" (more common in England) or "grades" (more common in the US) to refer to their results. Usage is mixed, although "marks" more commonly refers to a single score, whereas "grades" often refers to the cumulative score in a class.
Units of measurement.
Unlike in the United States, use of metric units within a majority of industries is standard in Canada, as a result of the partial national adoption of the metric system during the mid-to-late 1970s that was eventually stalled; this has spawned some colloquial usages such as "klick" for kilometre.
Nonetheless, US units are still used in many situations. Imperial volumes are also used, albeit rarely, although many Canadians and Americans mistakenly conflate the two measurement systems despite their slight differences (e.g. US, Canadian, and metric cups are 237 mL, 227 mL, and 250 mL respectively).
For example, most English Canadians state their weight and height in pounds and feet/inches, respectively. This is also the case for many Quebec Francophones. Distances while playing golf are always marked and discussed in yards, though official scorecards may also show metres. Temperatures for cooking or pools are often given in Fahrenheit, while the weather is given in Celsius. Directions in the Prairie provinces are sometimes given using miles, because the country roads generally follow the mile-based grid of the Dominion Land Survey. Motor vehicle speed limits are measured in kilometres per hour.
Canadians measure floor areas, both residential and commercial, in square feet or square metres. Land area is given in square feet, square metres, acres or hectares. Fuel efficiency is more often discussed in the metric L/100 km than in miles per US gallon. The Letter paper size of 8.5 inches × 11 inches is used instead of the international and metric equivalent A4 size of 210 mm × 297 mm. Beer cans are 355 mL (12 US fl oz), while beer bottles are typically 341 mL (12 imperial fl oz), and draft beer is sold in various units: US or imperial fluid ounces, US or imperial pints, or occasionally millilitres.
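The figures quoted in this section follow from a few standard conversion factors. A minimal Python sketch of the arithmetic (the constants are standard definitions; the helper function name is our own):

```python
# Back-of-envelope conversions for the mixed usage described above.
US_GALLON_L = 3.785411784    # litres per US gallon
MILE_KM = 1.609344           # kilometres per mile
US_FL_OZ_ML = 29.5735295625  # millilitres per US fluid ounce
IMP_FL_OZ_ML = 28.4130625    # millilitres per imperial fluid ounce

def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert US miles-per-gallon to the metric L/100 km figure."""
    return 100.0 * US_GALLON_L / (mpg * MILE_KM)

print(round(mpg_to_l_per_100km(30), 1))  # 30 mpg (US) is about 7.8 L/100 km
print(round(12 * US_FL_OZ_ML))           # 12 US oz can  -> about 355 mL
print(round(12 * IMP_FL_OZ_ML))          # 12 imp oz bottle -> about 341 mL
```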
Building materials are sold in soft metric conversions of imperial sizes but are often still referred to by the imperial sizes. For example, an 8-inch concrete masonry unit can be called an 8-inch CMU or a 190 CMU. The actual material used in the US and Canada is the same.
Transport.
"Expressway" may also refer to a limited-access road that has control of access but has at-grade junctions, railway crossings (for example, the Harbour Expressway in Thunder Bay.) Sometimes the term "Parkway" is also used (for example, the Hanlon Parkway in Guelph). In Saskatchewan, the term 'grid road' is used to refer to minor highways or rural roads, usually gravel, referring to the 'grid' upon which they were originally designed. In Quebec, freeways and expressways are called autoroutes.
In Alberta, the generic "Trail" is often used to describe a freeway, expressway or major urban street (for example, Deerfoot Trail, Macleod Trail or Crowchild Trail in Calgary, Yellowhead Trail, Victoria Trail or Mark Messier/St.Albert Trail in Edmonton). The British term "motorway" is not used. The American terms "turnpike" and "tollway" for a toll road are not common. The term "throughway" or "thruway" was used for first tolled limited-access highways (for example, the Deas Island Throughway, now Highway 99, from Vancouver, BC, to Blaine, Washington, USA or the Saint John Throughway (Highway 1) in Saint John, NB), but this term is not common anymore. In everyday speech, when a particular roadway is not being specified, the term "highway" is generally or exclusively used.
Law.
Lawyers in all parts of Canada, except Quebec, which has its own civil law system, are called "barristers and solicitors" because any lawyer licensed in any of the common law provinces and territories must pass bar exams for, and is permitted to engage in, both types of legal practice. This is in contrast to other common-law jurisdictions such as England, Wales and Ireland, where the two are traditionally separated (i.e., Canada has a fused legal profession). The words "lawyer" and "counsel" (not "counsellor") predominate in everyday contexts; the word "attorney" refers to any personal representative. Canadian lawyers generally do not refer to themselves as "attorneys", a term that is common in the United States.
The equivalent of an American "district attorney", meaning the barrister representing the state in criminal proceedings, is called a "crown attorney" (in Ontario), "crown counsel" (in British Columbia), "crown prosecutor" or "the crown", on account of Canada's status as a constitutional monarchy in which the Crown is the locus of state power.
The words "advocate" and "notary" – two distinct professions in Quebec civil law – are used to refer to that province's approximate equivalents of barrister and solicitor, respectively. It is not uncommon for English-speaking advocates in Quebec to refer to themselves in English as "barrister(s) and solicitor(s)", as most advocates chiefly perform what would traditionally be known as "solicitor's work", while only a minority of advocates actually appear in court. In Canada's common law provinces and territories, the word "notary" means strictly a notary public.
Within the Canadian legal community itself, the word "solicitor" is often used to refer to any Canadian lawyer in general (much like the way the word "attorney" is used in the United States to refer to any American lawyer in general). Despite the conceptual distinction between "barrister" and "solicitor", Canadian court documents would contain a phrase such as "John Smith, solicitor for the Plaintiff" even though John Smith may well himself be the barrister who argues the case in court. In a letter introducing him- or herself to an opposing lawyer, a Canadian lawyer normally writes something like "I am the solicitor for Mr. Tom Jones."
The word "litigator" is also used by lawyers to refer to a fellow lawyer who specializes in lawsuits even though the more traditional word "barrister" is still employed to denote the same specialization.
Judges of Canada's superior courts, which exist at the provincial and territorial levels, are traditionally addressed as "My Lord" or "My Lady". This varies by jurisdiction, and some superior court judges prefer the titles "Mister Justice" or "Madam Justice" to "Lordship".
Masters are addressed as "Mr. Master" or simply "Sir." In British Columbia, masters are addressed as "Your Honour."
Judges of provincial or inferior courts are traditionally referred to in person as "Your Honour". Judges of the Supreme Court of Canada and of the federal-level courts prefer the use of "Mister/Madam (Chief) Justice". Justices of The Peace are addressed as "Your Worship". "Your Honour" is also the correct form of address for a Lieutenant Governor.
A serious crime is called an indictable offence, while a less-serious crime is called a summary conviction offence. The older words felony and misdemeanour, which are still used in the United States, are not used in Canada's current "Criminal Code" (R.S.C. 1985, c. C-46) or by today's Canadian legal system. As noted throughout the "Criminal Code", a person accused of a crime is called "the accused" and not "the defendant", a term used instead in civil lawsuits.
In Canada, "visible minority" refers to a non-aboriginal person or group visibly not one of the majority race in a given population. The term comes from the "Canadian Employment Equity Act", which defines such people as "persons, other than Aboriginal people, who are non-Caucasian in race or non-white in colour." The term is used as a demographic category by Statistics Canada. The qualifier "visible" is used to distinguish such minorities from the "invisible" minorities determined by language (English vs. French) and certain distinctions in religion (Catholics vs. Protestants).
A county in British Columbia means only a regional jurisdiction of the courts and justice system and is not otherwise connected to governance as with counties in other provinces and in the United States. The rough equivalent to "county" as used elsewhere is a "Regional District".
Places.
Distinctive Canadianisms are:
Daily life.
Terms common in Canada, Britain, Ireland, Australia and other Commonwealth nations but less frequent or nonexistent in the United States are:
The following are more or less distinctively Canadian:
Apparel.
The following are common in Canada, but not in the United States or the United Kingdom.
Informal speech.
One of the most distinctive Canadian phrases is the spoken interrogation or tag "eh". The only usage of "eh" exclusive to Canada, according to the "Canadian Oxford Dictionary", is for "ascertaining the comprehension, continued interest, agreement, etc., of the person or persons addressed" as in, "It's four kilometres away, eh, so I have to go by bike." In that case, "eh?" is used to confirm the attention of the listener and to invite a supportive noise such as "mm" or "oh" or "okay". This usage is also common in Queensland, Australia and New Zealand. Other uses of "eh" – for instance, in place of "huh?" or "what?" meaning "please repeat or say again" – are also found in parts of the British Isles and Australia. It is common in Northern/Central Ontario, the Maritimes and the Prairie provinces. The word "eh" is used quite frequently in the North Central dialect, so a Canadian accent is often perceived in people from North Dakota, Michigan, Minnesota, and Wisconsin.
A "rubber" in the US and Canada is slang for a condom. In Canada, it sometimes means an eraser (as in the United Kingdom and Ireland).
The word "bum" can refer either to the buttocks (as in Britain), or to a homeless person (as in the US). The "buttocks" sense does not have the indecent character it retains in British use, as it and "butt" are commonly used as a polite or childish euphemism for ruder words such as "arse" (commonly used in Atlantic Canada and among older people in Ontario and to the west) or "ass", or "mitiss" (used in the Prairie Provinces, especially in northern and central Saskatchewan; probably originally a Cree loanword). Older Canadians may see "bum" as more polite than "butt", which before the 1980s was often considered rude.
Similarly the word "pissed" can refer either to being drunk (as in Britain), or being angry (as in the US), though anger is more often said as "pissed off", while "piss drunk" or "pissed up" is said to describe inebriation (though "piss drunk" is sometimes also used in the US, especially in the northern states).
The term "Canuck" simply means "Canadian" in its demonymic form, and, as a term used even by Canadians themselves, it is not considered derogatory. (In the 19th century and early 20th century it tended to refer to French-Canadians.) The only Canadian-built version of the popular World War I-era American Curtiss JN-4 "Jenny" training biplane aircraft, the JN-4C, 1,260 of which were built, got the "Canuck" nickname; so did another aircraft, the Fleet Model 80, built from the mid-1940s until the late 1950s. The nickname Janey Canuck was used by Anglophone women's rights writer Emily Murphy in the 1920s and the "Johnny Canuck" comic book character of the 1940s. Throughout the 1970s, Canada's winning World Cup men's downhill ski team was called the "Crazy Canucks" for their fearlessness on the slopes. It is also the name of the Vancouver Canucks, the National Hockey League team of Vancouver, British Columbia.
The term "hoser", popularized by Bob & Doug McKenzie, typically refers to an uncouth, beer-swilling male and is a euphemism for "loser" coming from the earlier days of hockey played on an outdoor rink and the losing team would have to hose down the ice after the game so it froze smooth.
A "Newf" or "Newfie" is someone from Newfoundland and Labrador; sometimes considered derogatory. In Newfoundland, the term "Mainlander" refers to any Canadian (sometimes American, occasionally Labradorian) not from the island of Newfoundland. "Mainlander" is also occasionally used derogatorily.
In the Maritimes, a "Caper" or "Cape Bretoner" is someone from Cape Breton Island, a "Bluenoser" is someone with a thick, usually southern Nova Scotia accent or as a general term for a Nova Scotian (including Cape Bretoners), while an "Islander" is someone from Prince Edward Island (the same term is used in British Columbia for people from Vancouver Island, or the numerous islands along it). A "Haligonian" refers to someone from the city of Halifax.
Cape Bretoners and Newfies (from Newfoundland and Labrador) often share similar slang. "Barmp" is often used as the sound a car horn makes, for example: "He cut me off so I barmped the horn at him." "B'y", while it sounds like a traditional farewell, is a syncopated shortening of the word "boy", referring to a person, for example: "How's it goin, b'y?" Another slang term in common use is "doohickey", which means an object, for example: "Pass me that doohickey over there." When individuals use the word "biffed", they mean that they threw something, for example: "I got frustrated so I biffed it across the room."
Survey and research methodology.
Canadian English dialectology examines Canadian English through the use of written surveys, owing to the vastness of the country and the difficulty of conducting face-to-face interviews on a nationwide level. The history of written surveys in Canadian-English dialectology includes Avis's study of speech differences along the Ontario–United States border through the use of questionnaires, and the Survey of Canadian English directed by Scargill. More recent examples include Nylvek's survey of Saskatchewan English and Chambers' trans-Canada dialect questionnaires.
Attitudes.
An attitude study in the late 1970s revealed a positive attitude toward Canadian linguistic features, including the front-vowel merger before /r/, the low-back vowel merger, Canadian Raising, and Canadian lexical items. Even so, the sample group in British Columbia showed a preference for UK and US English.
This attitude shifted in the years that followed. A survey about attitudes towards CanE was conducted with a diverse sample group in Vancouver, BC, in 2009. Among 429 Vancouverites, 81.1% believed there is a Canadian way of speaking English, 72.9% could tell CanE speakers from American English speakers, 69.1% considered CanE a part of their Canadian identity, and 74.1% thought CanE should be taught in schools. Due to the unavailability of free and easy-to-access CanE dictionaries, many Canadians opt for non-Canadian English dictionaries today. Historically, American, British, and Irish texts were used in Canadian schools for the most part; even though Canadian reference works were written and became available in the 1960s, they were never preferred as teaching material.
A preference change can be seen at the end of higher education in Canada. At the University of Toronto's Graduate English department, "Canadian English" and a "consistent spelling" are officially "the standard for all Ph.D. dissertations," with the "Canadian Oxford English Dictionary" as the official guideline. However, no grammar guide is specified, because a solid standard for spelling and grammar was never developed.
In 2011, just under 21.5 million Canadians, representing 65% of the population, spoke English most of the time at home, while 58% declared it their mother tongue. English is the major language everywhere in Canada except Quebec, and most Canadians (85%) can speak English. While English is not the preferred language in Quebec, 36.1% of the Québécois can speak English. Nationally, Francophones are five times more likely to speak English than Anglophones are to speak French – 44% and 9% respectively. Only 3.2% of Canada's English-speaking population resides in Quebec—mostly in Montreal.
A study conducted in 2002 asked Canadians from Ontario and Alberta about the "pleasantness" and "correctness" of different provincial varieties of Canadian English. Albertans and Ontarians alike rated their own English and BC English in the top three, but both held a low opinion of Quebec English. Contrary to the assumption that Toronto or Ontario English would be the most prestigious, given that those regions are the most economically robust, BC English enjoyed the best public opinion regarding pleasantness and correctness among the participants.
Jaan Lilles argues in an essay for "English Today" that there is no variety of "Canadian English." According to Lilles, Canadian English is simply not a "useful fiction". He goes on to argue that supposedly unique features of Canadian speakers, such as certain lexical terms like "muskeg", are too often artificially exaggerated to distinguish Canadian speech primarily from that found in the United States. Lilles was heavily critiqued in the next issue of "English Today" by lexicographer Fraser Sutherland and others. According to Stefan Dollinger, Lilles' paper "is not a paper based on any data or other new information but more of a pamphlet – so much so that it should not have been published without a public critique". He continues, "The paper is insightful for different reasons: it is a powerful testimony of personal anecdote and opinion [...]. As an opinion piece, it offers a good debating case." As a linguistic account, however, it "essentializes a prior state, before Canada was an independent political entity."
Further reading.
Dollinger, Stefan (2015). The Written Questionnaire in Social Dialectology: History, Theory, Practice. Amsterdam/Philadelphia: Benjamins. The book's examples are taken exclusively from Canadian English, and it represents one of the more extensive collections of variables for Canadian English.
|
6343
|
1300666534
|
https://en.wikipedia.org/wiki?curid=6343
|
Czech language
|
Czech, historically known as Bohemian, is a West Slavic language of the Czech–Slovak group, written in Latin script. Spoken by over 12 million people including second language speakers, it serves as the official language of the Czech Republic. Czech is closely related to Slovak, to the point of high mutual intelligibility, as well as to Polish to a lesser degree. Czech is a fusional language with a rich system of morphology and relatively flexible word order. Its vocabulary has been extensively influenced by Latin and German.
The Czech–Slovak group developed within West Slavic in the high medieval period, and the standardization of Czech and Slovak within the Czech–Slovak dialect continuum emerged in the early modern period. In the later 18th to mid-19th century, the modern written standard became codified in the context of the Czech National Revival. The most widely spoken non-standard variety, known as Common Czech, is based on the vernacular of Prague, but is now spoken as an interdialect throughout most of Bohemia. The Moravian dialects spoken in Moravia and Czech Silesia are considerably more varied than the dialects of Bohemia.
Czech has a moderately-sized phoneme inventory, comprising ten monophthongs, three diphthongs and 25 consonants (divided into "hard", "neutral" and "soft" categories). Words may contain complicated consonant clusters or lack vowels altogether. Czech has a raised alveolar trill, which is known to occur as a phoneme in only a few other languages, represented by the grapheme "ř".
Classification.
Czech is a member of the West Slavic sub-branch of the Slavic branch of the Indo-European language family. This branch includes Polish, Kashubian, Upper and Lower Sorbian and Slovak. Slovak is the most closely related language to Czech, followed by Polish and Silesian.
The West Slavic languages are spoken in Central Europe. Czech is distinguished from other West Slavic languages by a more-restricted distinction between "hard" and "soft" consonants (see Phonology below).
History.
Medieval/Old Czech.
The term "Old Czech" is applied to the period predating the 16th century, with the earliest records of the high medieval period also classified as "early Old Czech", but the term "Medieval Czech" is also used. The function of the written language was initially performed by Old Slavonic written in Glagolitic, later by Latin written in Latin script.
Around the 7th century, the Slavic expansion reached Central Europe, settling on the eastern fringes of the Frankish Empire. The West Slavic polity of Great Moravia formed by the 9th century. The Christianization of Bohemia took place during the 9th and 10th centuries. The diversification of the Czech-Slovak group within West Slavic began around that time, marked among other things by its use of the voiced velar fricative consonant (/ɣ/) and consistent stress on the first syllable.
The Bohemian (Czech) language is first recorded in writing in glosses and short notes during the 12th to 13th centuries. Literary works written in Czech appear in the late 13th and early 14th century and administrative documents first appear towards the late 14th century. The first complete Bible translation, the Leskovec-Dresden Bible, also dates to this period. Old Czech texts, including poetry and cookbooks, were also produced outside universities.
Literary activity becomes widespread in the early 15th century in the context of the Bohemian Reformation. Jan Hus contributed significantly to the standardization of Czech orthography, advocated for widespread literacy among Czech commoners (particularly in religion) and made early efforts to model written Czech after the spoken language.
Early Modern Czech.
There was no standardization distinguishing between Czech and Slovak prior to the 15th century. In the 16th century, the division between Czech and Slovak became apparent, marking a confessional division: Lutheran Protestants in Slovakia used Czech orthography, while Catholics, especially Slovak Jesuits, began to use a separate Slovak orthography based on Western Slovak dialects.
The publication of the Kralice Bible between 1579 and 1593 (the first complete Czech translation of the Bible from the original languages) became very important for standardization of the Czech language in the following centuries as it was used as a model for the standard language.
In 1615, the Bohemian "diet" tried to declare Czech to be the only official language of the kingdom. After the Bohemian Revolt (of predominantly Protestant aristocracy) which was defeated by the Habsburgs in 1620, the Protestant intellectuals had to leave the country. This emigration together with other consequences of the Thirty Years' War had a negative impact on the further use of the Czech language. In 1627, Czech and German became official languages of the Kingdom of Bohemia and in the 18th century German became dominant in Bohemia and Moravia, especially among the upper classes.
Modern Czech.
Modern standard Czech originates in standardization efforts of the 18th century. By then the language had developed a literary tradition, and since then it has changed little; journals from that period contain no substantial differences from modern standard Czech, and contemporary Czechs can understand them with little difficulty. At some point before the 18th century, the Czech language abandoned a distinction between phonemic /l/ and /ʎ/ which survives in Slovak.
With the beginning of the national revival of the mid-18th century, Czech historians began to emphasize their people's accomplishments from the 15th through 17th centuries, rebelling against the Counter-Reformation (the Habsburg re-catholization efforts which had denigrated Czech and other non-Latin languages). Czech philologists studied sixteenth-century texts and advocated the return of the language to high culture. This period is known as the Czech National Revival (or Renaissance).
During the national revival, in 1809 linguist and historian Josef Dobrovský released a German-language grammar of Old Czech entitled "Ausführliches Lehrgebäude der böhmischen Sprache" ('Comprehensive Doctrine of the Bohemian Language'). Dobrovský had intended his book to be descriptive, and did not think Czech had a realistic chance of returning as a major language. However, Josef Jungmann and other revivalists used Dobrovský's book to advocate for a Czech linguistic revival. Changes during this time included spelling reform (notably, "í" in place of the former "j" and "j" in place of "g"), the use of "t" (rather than "ti") to end infinitive verbs and the non-capitalization of nouns (which had been a late borrowing from German). These changes differentiated Czech from Slovak. Modern scholars disagree about whether the conservative revivalists were motivated by nationalism or considered contemporary spoken Czech unsuitable for formal, widespread use.
Adherence to historical patterns was later relaxed and standard Czech adopted a number of features from Common Czech (a widespread informal interdialectal variety), such as leaving some proper nouns undeclined. This has resulted in a relatively high level of homogeneity among all varieties of the language.
Geographic distribution.
Czech is spoken by about 10 million residents of the Czech Republic. A Eurobarometer survey conducted from January to March 2012 found that the first language of 98 percent of Czech citizens was Czech, the third-highest proportion of a population in the European Union (behind Greece and Hungary).
As the official language of the Czech Republic (a member of the European Union since 2004), Czech is one of the EU's official languages and the 2012 Eurobarometer survey found that Czech was the foreign language most often used in Slovakia. Economist Jonathan van Parys collected data on language knowledge in Europe for the 2012 European Day of Languages. The five countries with the greatest use of Czech were the Czech Republic (98.77 percent), Slovakia (24.86 percent), Portugal (1.93 percent), Poland (0.98 percent) and Germany (0.47 percent).
Czech speakers in Slovakia primarily live in cities. Since it is a recognized minority language in Slovakia, Slovak citizens who speak only Czech may communicate with the government in their language in the same way that Slovak speakers in the Czech Republic also do.
United States.
Immigration of Czechs from Europe to the United States occurred primarily from 1848 to 1914. Czech is a Less Commonly Taught Language in U.S. schools, and is taught at Czech heritage centers. Large communities of Czech Americans live in the states of Texas, Nebraska and Wisconsin. In the 2000 United States Census, Czech was reported as the most common language spoken at home (besides English) in Valley, Butler and Saunders Counties, Nebraska and Republic County, Kansas. With the exception of Spanish (the non-English language most commonly spoken at home nationwide), Czech was the most common home language in more than a dozen additional counties in Nebraska, Kansas, Texas, North Dakota and Minnesota. 70,500 Americans spoke Czech as their first language (49th place nationwide, after Turkish and before Swedish).
Phonology.
Vowels.
Standard Czech contains ten basic vowel phonemes and three diphthongs. The vowels are /a/, /ɛ/, /ɪ/, /o/ and /u/, and their long counterparts /aː/, /ɛː/, /iː/, /oː/ and /uː/. The diphthongs are /ou̯/, /au̯/ and /ɛu̯/; the last two are found only in loanwords such as "auto" (car) and "euro".
In Czech orthography, the vowels are spelled as follows:
The letter "ě" indicates that the previous consonant is palatalized (e.g. "něco" /ɲɛtso/). After a labial it represents /jɛ/ (e.g. "běs" /bjɛs/), but "mě" is pronounced /mɲɛ/ (cf. "mně", which is pronounced the same).
Consonants.
The consonant phonemes of Czech and their equivalent letters in Czech orthography are as follows:
Czech consonants are categorized as "hard" ("h", "ch", "k", "r", "d", "t", "n"), "neutral" ("b", "f", "l", "m", "p", "s", "v", "z"), or "soft" ("ž", "š", "č", "ř", "c", "j", "ď", "ť", "ň").
Hard consonants may not be followed by "i" or "í" in writing, or soft ones by "y" or "ý" (except in loanwords such as "kilogram"). Neutral consonants may take either character. Hard consonants are sometimes known as "strong", and soft ones as "weak". This distinction is also relevant to the declension patterns of nouns, which vary according to whether the final consonant of the noun stem is hard or soft.
Voiced consonants with unvoiced counterparts are unvoiced at the end of a word before a pause, and in consonant clusters voicing assimilation occurs, which matches voicing to the following consonant. The unvoiced counterpart of /ɦ/ is /x/.
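The final-devoicing half of this rule is mechanical enough to sketch in a few lines of Python. The voiced/voiceless pairings below are the standard Czech ones, but working on single orthographic letters (and ignoring digraphs, consonant clusters, and the regressive assimilation described above) is a simplifying assumption for illustration only.

# Minimal sketch of Czech word-final devoicing, using orthography as a
# stand-in for phonemes. Cluster-internal voicing assimilation and
# digraph handling are deliberately omitted for brevity.

VOICED_TO_VOICELESS = {
    "b": "p", "d": "t", "ď": "ť", "g": "k",
    "v": "f", "z": "s", "ž": "š", "h": "ch",  # "h" devoices to the sound spelled "ch"
}

def devoice_final(word: str) -> str:
    """Return the word with a final voiced obstruent devoiced."""
    if word and word[-1] in VOICED_TO_VOICELESS:
        return word[:-1] + VOICED_TO_VOICELESS[word[-1]]
    return word

print(devoice_final("led"))   # "let"  -- "led" (ice) is pronounced [lɛt] before a pause
print(devoice_final("hrad"))  # "hrat" -- "hrad" (castle) ends in a [t] sound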
The phoneme represented by the letter "ř" (capital "Ř") is very rare among languages and often claimed to be unique to Czech, though it also occurs in some dialects of Kashubian, and formerly occurred in Polish. It represents the raised alveolar non-sonorant trill (IPA: [r̝]), a sound somewhere between Czech "r" and "ž" (example: "řeka", "river"), and is present in "Dvořák". In unvoiced environments, /r̝/ is realized as its voiceless allophone [r̝̊], a sound somewhere between Czech "r" and "š".
The consonants /r/, /l/ and /m/ can be syllabic, acting as syllable nuclei in place of a vowel. "Strč prst skrz krk" ("Stick [your] finger through [your] throat") is a well-known Czech tongue twister using syllabic consonants but no vowels.
Stress.
Each word has primary stress on its first syllable, except for enclitics (minor, monosyllabic, unstressed words). In all words of more than two syllables, every odd-numbered syllable receives secondary stress. Stress is unrelated to vowel length; both long and short vowels can be stressed or unstressed. Vowels are never reduced (e.g. to schwa sounds) when unstressed. When a noun is preceded by a monosyllabic preposition, the stress usually moves to the preposition, e.g. "do Prahy" ("to Prague").
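The stress rule itself can be restated as a short Python sketch; syllabification of the input is assumed to be done already, and clitic and preposition handling are left out.

# Sketch of the Czech stress rule described above: primary stress on the
# first syllable, secondary stress on each later odd-numbered syllable.

def stress_marks(syllables):
    """Return syllables prefixed with ˈ (primary) or ˌ (secondary) stress."""
    out = []
    for i, syl in enumerate(syllables, start=1):
        if i == 1:
            out.append("ˈ" + syl)
        elif i % 2 == 1:
            out.append("ˌ" + syl)
        else:
            out.append(syl)
    return out

# "neviděli" (they did not see), syllabified ne-vi-dě-li:
print(stress_marks(["ne", "vi", "dě", "li"]))  # ['ˈne', 'vi', 'ˌdě', 'li']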
Grammar.
Czech grammar, like that of other Slavic languages, is fusional; its nouns, verbs, and adjectives are inflected by phonological processes to modify their meanings and grammatical functions, and the easily separable affixes characteristic of agglutinative languages are limited.
Czech inflects for case, gender and number in nouns and tense, aspect, mood, person and subject number and gender in verbs.
Parts of speech include adjectives, adverbs, numbers, interrogative words, prepositions, conjunctions and interjections. Adverbs are primarily formed from adjectives by taking the final "ý" or "í" of the base form and replacing it with "e", "ě", "y", or "o". Negative statements are formed by adding the affix "ne-" to the main verb of a clause, with one exception: "je" (he, she or it is) becomes "není".
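Because the negation rule has exactly one stated exception, it reduces to a two-branch function. The tiny Python sketch below only restates the rule; the example verb "mluví" (speaks) is chosen for illustration.

# Czech verbal negation as described above: prefix "ne-", with the single
# exception that "je" (he/she/it is) becomes "není".

def negate(verb: str) -> str:
    return "není" if verb == "je" else "ne" + verb

print(negate("mluví"))  # "nemluví" (does not speak)
print(negate("je"))     # "není"   (is not)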
Sentence and clause structure.
Because Czech uses grammatical case to convey word function in a sentence (instead of relying on word order, as English does), its word order is flexible. As a pro-drop language, in Czech an intransitive sentence can consist of only a verb; information about its subject is encoded in the verb. Enclitics (primarily auxiliary verbs and pronouns) appear in the second syntactic slot of a sentence, after the first stressed unit. The first slot can contain a subject or object, a main form of a verb, an adverb, or a conjunction (except for the light conjunctions "a", "and", "i", "and even" or "ale", "but").
Czech syntax has a subject–verb–object sentence structure. In practice, however, word order is flexible and used to distinguish topic and focus, with the topic or theme (known referents) preceding the focus or rheme (new information) in a sentence; Czech has therefore been described as a topic-prominent language. Although Czech has a periphrastic passive construction (like English), in colloquial style, word-order changes frequently replace the passive voice. For example, to change "Peter killed Paul" to "Paul was killed by Peter" the order of subject and object is inverted: "Petr zabil Pavla" ("Peter killed Paul") becomes "Paul, Peter killed" ("Pavla zabil Petr"). "Pavla" is in the accusative case, the grammatical object of the verb.
A word at the end of a clause is typically emphasized, unless an upward intonation indicates that the sentence is a question:
In parts of Bohemia (including Prague), questions such as "Jí pes bagetu?" without an interrogative word (such as "co", "what" or "kdo", "who") are intoned in a slow rise from low to high, quickly dropping to low on the last word or phrase.
In modern Czech syntax, adjectives precede nouns, with few exceptions. Relative clauses are introduced by relativizers such as the adjective "který", analogous to the English relative pronouns "which", "that" and "who"/"whom". As with other adjectives, it agrees with its associated noun in gender, number and case. Relative clauses follow the noun they modify. The following is a glossed example:
Declension.
In Czech, nouns and adjectives are declined into one of seven grammatical cases which indicate their function in a sentence, two numbers (singular and plural) and three genders (masculine, feminine and neuter). The masculine gender is further divided into animate and inanimate classes.
Case.
A nominative–accusative language, Czech marks subject nouns of transitive and intransitive verbs in the nominative case, which is the form found in dictionaries, and direct objects of transitive verbs are declined in the accusative case. The vocative case is used to address people. The remaining cases (genitive, dative, locative and instrumental) indicate semantic relationships, such as noun adjuncts (genitive), indirect objects (dative), or agents in passive constructions (instrumental). Additionally, prepositions and some verbs require their complements to be declined in a certain case. The locative case is only used after prepositions. An adjective's case agrees with that of the noun it modifies. When Czech children learn their language's declension patterns, the cases are referred to by number: 1. nominative, 2. genitive, 3. dative, 4. accusative, 5. vocative, 6. locative and 7. instrumental.
Some prepositions require the nouns they modify to take a particular case. The cases assigned by each preposition are based on the physical (or metaphorical) direction, or location, conveyed by it. For example, "od" (from, away from) and "z" (out of, off) assign the genitive case. Other prepositions take one of several cases, with their meaning dependent on the case; "na" means "on to" or "for" with the accusative case, but "on" with the locative.
This is a glossed example of a sentence using several cases:
Gender.
Czech distinguishes three genders—masculine, feminine, and neuter—and the masculine gender is subdivided into animate and inanimate. With few exceptions, feminine nouns in the nominative case end in "-a", "-e", or a consonant; neuter nouns in "-o", "-e", or "-í", and masculine nouns in a consonant. Adjectives, participles, most pronouns, and the numbers "one" and "two" are marked for gender and agree with the gender of the noun they modify or refer to. Past tense verbs are also marked for gender, agreeing with the gender of the subject, e.g. "dělal" (he did, or made); "dělala" (she did, or made) and "dělalo" (it did, or made). Gender also plays a semantic role; most nouns that describe people and animals, including personal names, have separate masculine and feminine forms which are normally formed by adding a suffix to the stem, for example "Čech" (Czech man) has the feminine form "Češka" (Czech woman).
Nouns of different genders follow different declension patterns. Examples of declension patterns for noun phrases of various genders follow:
Number.
Nouns are also inflected for number, distinguishing between singular and plural. Typical of a Slavic language, Czech cardinal numbers one through four allow the nouns and adjectives they modify to take any case, but numbers over five require subject and direct object noun phrases to be declined in the genitive plural instead of the nominative or accusative, and when used as subjects these phrases take singular verbs. For example, "dva muži" ("two men") is in the nominative plural, but "pět mužů" ("five men") is in the genitive plural; a sketch of this numeral-government rule follows below.
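Here is the promised minimal Python sketch of the numeral-government rule, using "muž" (man) as the example noun. Compound numerals, case government by prepositions, and fractions are ignored as simplifying assumptions.

# Czech numeral government as described above: 1 takes the singular,
# 2-4 take the nominative plural, 5 and above force the genitive plural.

FORMS = {"one": "muž", "few": "muži", "many": "mužů"}

def with_count(n: int) -> str:
    if n == 1:
        form = FORMS["one"]    # singular
    elif 2 <= n <= 4:
        form = FORMS["few"]    # nominative plural
    else:
        form = FORMS["many"]   # genitive plural
    return f"{n} {form}"

for n in (1, 2, 5, 12):
    print(with_count(n))  # 1 muž, 2 muži, 5 mužů, 12 mužů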
Numbers decline for case, and the numbers one and two are also inflected for gender. Numbers one through five are shown below as examples. The number one has declension patterns identical to those of the demonstrative pronoun "ten".
Although Czech's grammatical numbers are singular and plural, several residuals of dual forms remain, such as the words "dva" ("two") and "oba" ("both"), which decline the same way. Some nouns for paired body parts use a historical dual form to express plural in some cases: "ruka" (hand)—"ruce" (nominative); "noha" (leg)—"nohama" (instrumental), "nohou" (genitive/locative); "oko" (eye)—"oči", and "ucho" (ear)—"uši". While two of these nouns are neuter in their singular forms, all plural forms are considered feminine; their gender is relevant to their associated adjectives and verbs. These forms are plural semantically, used for any non-singular count, as in "mezi čtyřma očima" (face to face, lit. "among four eyes"). The plural number paradigms of these nouns are a mixture of historical dual and plural forms. For example, "nohy" (legs; nominative/accusative) is a standard plural form of this type of noun.
Verb conjugation.
Czech verbs agree with their subjects in person (first, second or third), number (singular or plural), and in constructions involving participles, which includes the past tense, also in gender. They are conjugated for tense (past, present or future) and mood (indicative, imperative or conditional). For example, the conjugated verb "mluvíme" (we speak) is in the present tense and first-person plural; it is distinguished from other conjugations of the infinitive "mluvit" by its ending, "-íme". The infinitive form of Czech verbs ends in "-t" (archaically, "-ti" or "-ci"). It is the form found in dictionaries and the form that follows auxiliary verbs (for example, "můžu tě slyšet"—"I can "hear" you").
Aspect.
Typical of Slavic languages, Czech marks its verbs for one of two grammatical aspects: perfective and imperfective. Most verbs are part of inflected aspect pairs—for example, "koupit" (perfective) and "kupovat" (imperfective). Although the verbs' meaning is similar, in perfective verbs the action is completed and in imperfective verbs it is ongoing or repeated. This is distinct from past and present tense. Any verb of either aspect can be conjugated into either the past or present tense, but the future tense is only used with imperfective verbs. Aspect describes the state of the action at the time specified by the tense.
The verbs of most aspect pairs differ in one of two ways: by prefix or by suffix. In prefix pairs, the perfective verb has an added prefix—for example, the imperfective "psát" (to write, to be writing) compared with the perfective "napsat" (to write down). The most common prefixes are "na-", "o-", "po-", "s-", "u-", "vy-", "z-" and "za-". In suffix pairs, a different infinitive ending is added to the perfective stem; for example, the perfective verbs "koupit" (to buy) and "prodat" (to sell) have the imperfective forms "kupovat" and "prodávat". Imperfective verbs may undergo further morphology to make other imperfective verbs (iterative and frequentative forms), denoting repeated or regular action. The verb "jít" (to go) has the iterative form "chodit" (to go regularly) and the frequentative form "chodívat" (to go occasionally; to tend to go).
Many verbs have only one aspect, and verbs describing continual states of being—"být" (to be), "chtít" (to want), "moct" (to be able to), "ležet" (to lie down, to be lying down)—have no perfective form. Conversely, verbs describing immediate states of change—for example, "otěhotnět" (to become pregnant) and "nadchnout se" (to become enthusiastic)—have no imperfective aspect.
Tense.
The present tense in Czech is formed by adding an ending that agrees with the person and number of the subject at the end of the verb stem. As Czech is a null-subject language, the subject pronoun can be omitted unless it is needed for clarity. The past tense is formed using a participle which ends in "-l" and a further ending which agrees with the gender and number of the subject. For the first and second persons, the auxiliary verb "být" conjugated in the present tense is added.
In some contexts, the present tense of perfective verbs (which differs from the English present perfect) implies future action; in others, it connotes habitual action. The perfective present is used to refer to completion of actions in the future and is distinguished from the imperfective future tense, which refers to actions that will be ongoing in the future. The future tense is regularly formed using the future conjugation of "být" (as shown in the table on the left) and the infinitive of an imperfective verb, for example, "budu jíst"—"I will eat" or "I will be eating". Where "budu" has a noun or adjective complement it means "I will be", for example, "budu šťastný" (I will be happy). Some verbs of movement form their future tense by adding the prefix "po-" to the present tense forms instead, e.g. "jedu" ("I go") > "pojedu" ("I will go").
Mood.
Czech verbs have three grammatical moods: indicative, imperative and conditional. The imperative mood is formed by adding specific endings for each of three person–number categories: "-Ø/-i/-ej" for second-person singular, "-te/-ete/-ejte" for second-person plural and "-me/-eme/-ejme" for first-person plural. Imperatives are usually expressed using perfective verbs if positive and imperfective verbs if negative. The conditional mood is formed with a conditional auxiliary verb after the participle ending in -l which is used to form the past tense. This mood indicates hypothetical events and can also be used to express wishes.
Verb classes.
Most Czech verbs fall into one of five classes, which determine their conjugation patterns. The future tense of "být" would be classified as a Class I verb because of its endings. Examples of the present tense of each class and some common irregular verbs follow in the tables below:
Orthography.
Czech has one of the most phonemic orthographies of all European languages. Its alphabet contains 42 graphemes, most of which correspond to individual phonemes, and it contains only one digraph: "ch", which follows "h" in the alphabet. The characters "q", "w" and "x" appear only in foreign words. The háček (ˇ) is used with certain letters to form new characters: "š", "ž", and "č", as well as "ň", "ě", "ř", "ť", and "ď" (the latter five uncommon outside Czech). The last two letters are sometimes written with a comma above (ʼ, an abbreviated háček) because of their height. Czech orthography has influenced the orthographies of other Balto-Slavic languages and some of its characters have been adopted for transliteration of Cyrillic.
Czech orthography reflects vowel length; long vowels are indicated by an acute accent or, in the case of the character "ů", a ring. Long "u" is usually written "ú" at the beginning of a word or morpheme ("úroda", "neúrodný") and "ů" in the middle, except for loanwords ("skútr") or onomatopoeia ("bú"). Long vowels and "ě" are not considered separate letters in the alphabetical order. The character "ó" exists only in loanwords and onomatopoeia.
Czech typographical features not associated with phonetics generally resemble those of most European languages that use the Latin script, including English. Proper nouns, honorifics, and the first letters of quotations are capitalized, and punctuation is typical of other Latin European languages. Ordinal numbers (1st) use a point, as in German (1.). The Czech language uses a decimal comma instead of a decimal point. When writing a long number, spaces between every three digits, including those in decimal places, may be used for better orientation in handwritten texts. The number 1,234,567.89101 may be written as 1234567,89101 or 1 234 567,891 01. In proper noun phrases (except personal and settlement names), only the first word and proper nouns inside such phrases are capitalized ("Pražský hrad", Prague Castle).
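A short Python sketch of the numeric conventions just described: decimal comma, with optional spaces grouping digits in threes on both sides of the decimal mark. The hand-rolled grouping is for illustration only (a real application would use a locale library), and the input is assumed to be a plain digits-and-point string.

# Format a number in the Czech style described above.

def format_czech(number: str) -> str:
    integer, _, fraction = number.partition(".")
    grouped_int = f"{int(integer):,}".replace(",", " ")
    grouped_frac = " ".join(fraction[i:i + 3] for i in range(0, len(fraction), 3))
    return f"{grouped_int},{grouped_frac}" if fraction else grouped_int

print(format_czech("1234567.89101"))  # 1 234 567,891 01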
Varieties.
The modern literary standard and prestige variety, known as "Standard Czech" ("spisovná čeština"), is based on the standardization during the Czech National Revival in the 1830s, significantly influenced by Josef Jungmann's Czech–German dictionary published during 1834–1839. Jungmann used vocabulary of the Bible of Kralice (1579–1613) period and of the language used by his contemporaries. He borrowed words not present in Czech from other Slavic languages or created neologisms. Standard Czech is the formal register of the language which is used in official documents, formal literature, newspaper articles, education and occasionally public speeches. It is codified by the Czech Language Institute, which publishes occasional reforms to the codification. The most recent reform took place in 1993. The term "hovorová čeština" ("colloquial Czech") is sometimes used to refer to the spoken variety of standard Czech.
The most widely spoken vernacular form of the language is called "Common Czech" ("obecná čeština"), an interdialect influenced by spoken Standard Czech and the Central Bohemian dialects of the Prague region. Other Bohemian regional dialects have become marginalized, while Moravian dialects remain more widespread and diverse, with a political movement for Moravian linguistic revival active since the 1990s.
These varieties of the language (Standard Czech, spoken/colloquial Standard Czech, Common Czech, and regional dialects) form a stylistic continuum, in which contact between varieties of a similar prestige influences change within them.
Common Czech.
The main Czech vernacular, spoken primarily in Bohemia including the capital Prague, is known as Common Czech ("obecná čeština"). This is an academic distinction; most Czechs are unaware of the term or associate it with deformed or "incorrect" Czech. Compared to Standard Czech, Common Czech is characterized by simpler inflection patterns and differences in sound distribution.
Common Czech is distinguished from spoken/colloquial Standard Czech ("hovorová čeština"), which is a stylistic variety within standard Czech. Tomasz Kamusella defines the spoken variety of Standard Czech as a compromise between Common Czech and the written standard, while Miroslav Komárek calls Common Czech an intersection of spoken Standard Czech and regional dialects.
Common Czech has become ubiquitous in most parts of the Czech Republic since the later 20th century. It is usually defined as an interdialect used in common speech in Bohemia and western parts of Moravia (by about two thirds of all inhabitants of the Czech Republic). Common Czech is not codified, but some of its elements have become adopted in the written standard. Since the second half of the 20th century, Common Czech elements have also been spreading to regions previously unaffected, as a consequence of media influence. Standard Czech is still the norm for politicians, businesspeople and other Czechs in formal situations, but Common Czech is gaining ground in journalism and the mass media. The colloquial form of Standard Czech finds limited use in daily communication due to the expansion of the Common Czech interdialect. It is sometimes defined as a theoretical construct rather than an actual tool of colloquial communication, since in casual contexts, the non-standard interdialect is preferred.
Common Czech phonology is based on that of the Central Bohemian dialect group, which has a slightly different set of vowel phonemes to Standard Czech. The phoneme /ɛː/ is peripheral and usually merges with /iː/, e.g. in "malý město" (small town), "plamínek" (little flame) and "lítat" (to fly), and a second native diphthong /ɛɪ̯/ occurs, usually in places where Standard Czech has /iː/, e.g. "malej dům" (small house), "mlejn" (mill), "plejtvat" (to waste), "bejt" (to be). In addition, a prothetic "v-" is added to most words beginning "o-", such as "votevřít vokno" (to open the window).
Non-standard morphological features that are more or less common among all Common Czech speakers include:
Examples of declension (Standard Czech is added in italics for comparison):
"mladý člověk – young man/person, mladí lidé – young people, mladý stát – young state, mladá žena – young woman, mladé zvíře – young animal"
Bohemian dialects.
Apart from the Common Czech vernacular, there remain a variety of other Bohemian dialects, mostly in marginal rural areas. Dialect use began to weaken in the second half of the 20th century, and by the early 1990s regional dialect use was stigmatized, associated with the shrinking lower class and used in literature or other media for comedic effect. Increased travel and media availability to dialect-speaking populations has encouraged them to shift to (or add to their own dialect) Standard Czech.
The Czech Statistical Office in 2003 recognized the following Bohemian dialects:
*"Podskupina chodská" (Chod subgroup)
*"Podskupina doudlebská" (Doudleby subgroup)
*"Podskupina podkrknošská" (Krkonoše subgroup)
Moravian dialects.
The Czech dialects spoken in Moravia and Silesia are known as Moravian ("moravština"). In the Austro-Hungarian Empire, "Bohemian-Moravian-Slovak" was a language citizens could register as speaking (with German, Polish and several others). In the 2011 census, where respondents could optionally specify up to two first languages, 62,908 Czech citizens specified Moravian as their first language and 45,561 specified both Moravian and Czech.
Beginning in the sixteenth century, some varieties of Czech resembled Slovak; the southeastern Moravian dialects form a continuum between the Czech and Slovak languages, using the same declension patterns for nouns and pronouns and the same verb conjugations as Slovak.
A popular misconception holds that eastern Moravian dialects are closer to Slovak than Czech, but this is incorrect; in fact, the opposite is true, and certain dialects in far western Slovakia exhibit features more akin to standard Czech than to standard Slovak.
The Czech Statistical Office in 2003 recognized the following Moravian dialects:
*"Podskupina tišnovská" (Tišnov subgroup)
*"Podskupina slovácká" (Moravian Slovak subgroup)
*"Podskupina valašská" (Moravian Wallachian subgroup)
Sample.
In a 1964 textbook on Czech dialectology, Břetislav Koudela used the following sentence to highlight phonetic differences between dialects:
Mutual intelligibility with Slovak.
Czech and Slovak have been considered mutually intelligible; speakers of either language can communicate with greater ease than those of any other pair of West Slavic languages. Following the 1993 dissolution of Czechoslovakia, mutual intelligibility declined for younger speakers, probably because Czech speakers began to experience less exposure to Slovak and vice versa. A 2015 study involving participants with a mean age of around 23 nonetheless concluded that there remained a high degree of mutual intelligibility between the two languages. Grammatically, both languages share a common syntax.
One study showed that Czech and Slovak lexicons differed by 80 percent, but this high percentage was found to stem primarily from differing orthographies and slight inconsistencies in morphological formation; Slovak morphology is more regular (when changing from the nominative to the locative case, "Praha" becomes "Praze" in Czech and "Prahe" in Slovak). The two lexicons are generally considered similar, with most differences found in colloquial vocabulary and some scientific terminology. Slovak has slightly more borrowed words than Czech.
The similarities between Czech and Slovak led to the languages being considered a single language by a group of 19th-century scholars who called themselves "Czechoslavs" ("Čechoslované"), believing that the peoples were connected in a way which excluded German Bohemians and (to a lesser extent) Hungarians and other Slavs. During the First Czechoslovak Republic (1918–1938), although "Czechoslovak" was designated as the republic's official language, both Czech and Slovak written standards were used. Standard written Slovak was partially modeled on literary Czech, and Czech was preferred for some official functions in the Slovak half of the republic. Czech influence on Slovak was protested by Slovak scholars, and when Slovakia broke off from Czechoslovakia in 1938 as the Slovak State (which then aligned with Nazi Germany in World War II), literary Slovak was deliberately distanced from Czech. When the Axis powers lost the war and Czechoslovakia reformed, Slovak developed somewhat on its own (with Czech influence); during the Prague Spring of 1968, Slovak gained independence from (and equality with) Czech, due to the transformation of Czechoslovakia from a unitary state to a federation. Since the dissolution of Czechoslovakia in 1993, "Czechoslovak" has referred to improvised pidgins of the languages which have arisen from the decrease in mutual intelligibility.
Vocabulary.
Czech vocabulary derives primarily from Slavic, Baltic and other Indo-European roots. Although most verbs have Balto-Slavic origins, pronouns, prepositions and some verbs have wider, Indo-European roots. Some loanwords have been restructured by folk etymology to resemble native Czech words (e.g. "hřbitov", "graveyard" and "listina", "list").
Most Czech loanwords originated in one of two time periods. Earlier loanwords, primarily from German, Greek and Latin, arrived before the Czech National Revival. More recent loanwords derive primarily from English and French, and also from Hebrew, Arabic and Persian. Many Russian loanwords, principally animal names and naval terms, also exist in Czech.
Although older German loanwords were colloquial, recent borrowings from other languages are associated with high culture. During the nineteenth century, words with Greek and Latin roots were rejected in favor of those based on older Czech words and common Slavic roots; "music" is "muzyka" in Polish and "музыка" ("muzyka") in Russian, but in Czech it is "hudba". Some Czech words have been borrowed as loanwords into English and other languages—for example, "robot" (from "robota", "labor") and "polka" (from "polka", "Polish woman" or from "půlka" "half").
Example text.
Article 1 of the "Universal Declaration of Human Rights" in Czech:
"Všichni lidé rodí se svobodní a sobě rovní co do důstojnosti a práv. Jsou nadáni rozumem a svědomím a mají spolu jednat v duchu bratrství."
Article 1 of the "Universal Declaration of Human Rights" in English:
"All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood."
|
6344
|
45202605
|
https://en.wikipedia.org/wiki?curid=6344
|
Capsid
|
A capsid is the protein shell of a virus, enclosing its genetic material. It consists of several oligomeric (repeating) structural subunits made of protein called protomers. The observable 3-dimensional morphological subunits, which may or may not correspond to individual proteins, are called capsomeres. The proteins making up the capsid are called capsid proteins or viral coat proteins (VCP). The virus genomic component inside the capsid, along with occasionally present virus core protein, is called the virus core. The capsid and core together are referred to as a nucleocapsid (cf. also virion).
Capsids are broadly classified according to their structure. The majority of the viruses have capsids with either helical or icosahedral structure. Some viruses, such as bacteriophages, have developed more complicated structures due to constraints of elasticity and electrostatics. The icosahedral shape, which has 20 equilateral triangular faces, approximates a sphere, while the helical shape resembles the shape of a spring, taking the space of a cylinder but not being a cylinder itself. The capsid faces may consist of one or more proteins. For example, the foot-and-mouth disease virus capsid has faces consisting of three proteins named VP1–3.
Some viruses are "enveloped", meaning that the capsid is coated with a lipid membrane known as the viral envelope. The envelope is acquired by the capsid from an intracellular membrane in the virus' host; examples include the inner nuclear membrane, the Golgi membrane, and the cell's outer membrane.
Once the virus has infected a cell and begins replicating itself, new capsid subunits are synthesized using the protein biosynthesis mechanism of the cell. In some viruses, including those with helical capsids and especially those with RNA genomes, the capsid proteins co-assemble with their genomes. In other viruses, especially more complex viruses with double-stranded DNA genomes, the capsid proteins assemble into empty precursor procapsids that include a specialized portal structure at one vertex. Through this portal, viral DNA is translocated into the capsid.
Structural analyses of major capsid protein (MCP) architectures have been used to categorise viruses into lineages. For example, the bacteriophage PRD1, the algal virus "Paramecium bursaria Chlorella virus-1" (PBCV-1), mimivirus and the mammalian adenovirus have been placed in the same lineage, whereas tailed, double-stranded DNA bacteriophages ("Caudovirales") and herpesvirus belong to a second lineage.
Specific shapes.
Icosahedral.
The icosahedral structure is extremely common among viruses. The icosahedron consists of 20 triangular faces delimited by 12 fivefold vertexes and consists of 60 asymmetric units. Thus, an icosahedral virus is made of 60N protein subunits. The number and arrangement of capsomeres in an icosahedral capsid can be classified using the "quasi-equivalence principle" proposed by Donald Caspar and Aaron Klug. Like the Goldberg polyhedra, an icosahedral structure can be regarded as being constructed from pentamers and hexamers. The structures can be indexed by two integers "h" and "k", with "h" ≥ 1 and "k" ≥ 0; the structure can be thought of as taking "h" steps from the edge of a pentamer, turning 60 degrees counterclockwise, then taking "k" steps to get to the next pentamer. The triangulation number "T" for the capsid is defined as:
"T" = "h"² + "h"·"k" + "k"²
In this scheme, icosahedral capsids contain 12 pentamers plus 10("T" − 1) hexamers. The "T"-number is representative of the size and complexity of the capsids. Geometric examples for many values of "h", "k", and "T" can be found at List of geodesic polyhedra and Goldberg polyhedra.
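The bookkeeping above is easy to make concrete. This short Python sketch (the exceptions discussed below aside) computes the triangulation number and the resulting capsomere and subunit counts for a few common lattices.

# Caspar-Klug triangulation number and capsomere counts, as described
# above: T = h^2 + h*k + k^2, with 12 pentamers and 10(T - 1) hexamers.

def triangulation(h: int, k: int) -> int:
    return h * h + h * k + k * k

for h, k in [(1, 0), (1, 1), (2, 1)]:
    t = triangulation(h, k)
    print(f"h={h}, k={k}: T={t}, 12 pentamers, {10 * (t - 1)} hexamers, "
          f"{60 * t} subunits in total")
# T=1: 12 pentamers, 0 hexamers; T=3: 12 + 20; T=7: 12 + 60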
Many exceptions to this rule exist: for example, the polyomaviruses and papillomaviruses have pentamers instead of hexamers in hexavalent positions on a quasi T = 7 lattice. Members of the double-stranded RNA virus lineage, including reovirus, rotavirus and bacteriophage φ6, have capsids built of 120 copies of capsid protein, corresponding to a T = 2 capsid, or arguably a T = 1 capsid with a dimer in the asymmetric unit. Similarly, many small viruses have a pseudo T = 3 (or P = 3) capsid, which is organized according to a T = 3 lattice, but with distinct polypeptides occupying the three quasi-equivalent positions.
Prolate.
An elongated icosahedron is a common shape for the heads of bacteriophages. Such a structure is composed of a cylinder with a cap at either end. The cylinder is composed of 10 elongated triangular faces. The Q number (or Tmid), which can be any positive integer, specifies the number of triangles, composed of asymmetric subunits, that make up the 10 triangles of the cylinder. The caps are classified by the T (or Tend) number.
The bacterium "E. coli" is the host for bacteriophage T4 that has a prolate head structure. The bacteriophage encoded gp31 protein appears to be functionally homologous to "E. coli" chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virions during infection. Like GroES, gp31 forms a stable complex with GroEL chaperonin that is absolutely necessary for the folding and assembly "in vivo" of the bacteriophage T4 major capsid protein gp23.
Helical.
Many rod-shaped and filamentous plant viruses have capsids with helical symmetry. The helical structure can be described as a set of "n" 1-D molecular helices related by an "n"-fold axial symmetry. Helical transformations are classified into two categories: one-dimensional and two-dimensional helical systems. Creating an entire helical structure relies on a set of translational and rotational matrices which are coded in the protein data bank. Helical symmetry is given by the formula "P" = "μ" × "ρ", where "μ" is the number of structural units per turn of the helix, "ρ" is the axial rise per unit and "P" is the pitch of the helix. The structure is said to be open because any volume can be enclosed by varying the length of the helix. The best-understood helical virus is the tobacco mosaic virus. The virus is a single molecule of (+) strand RNA. Each coat protein on the interior of the helix binds three nucleotides of the RNA genome. Influenza A viruses differ by comprising multiple ribonucleoproteins; the viral NP protein organizes the RNA into a helical structure. The size also differs: the tobacco mosaic virus has 16.33 protein subunits per helical turn, while the influenza A virus has a 28 amino acid tail loop.
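To make the relation "P" = "μ" × "ρ" concrete, here is a worked Python example with approximate tobacco mosaic virus figures. The 16.33 subunits per turn comes from the text above, while the axial rise of about 0.14 nm per subunit is an assumed value quoted from common descriptions of TMV, not from this article.

# Pitch of a helical capsid from the relation P = mu * rho given above.

mu = 16.33    # structural units per helical turn (from the text)
rho = 0.14    # axial rise per unit in nm (assumed TMV value)

pitch = mu * rho
print(f"P = {mu} * {rho} nm = {pitch:.2f} nm per turn")  # ~2.29 nm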
Functions.
The functions of the capsid are to protect the genome, deliver the genome into the host cell, and interact with the host.
The virus must assemble a stable, protective protein shell to protect the genome from lethal chemical and physical agents. These include extremes of pH or temperature and proteolytic and nucleolytic enzymes. For non-enveloped viruses, the capsid itself may be involved in interaction with receptors on the host cell, leading to penetration of the host cell membrane and internalization of the capsid. Delivery of the genome occurs by subsequent uncoating or disassembly of the capsid and release of the genome into the cytoplasm, or by ejection of the genome through a specialized portal structure directly into the host cell nucleus.
Origin and evolution.
It has been suggested that many viral capsid proteins have evolved on multiple occasions from functionally diverse cellular proteins. The recruitment of cellular proteins appears to have occurred at different stages of evolution so that some cellular proteins were captured and refunctionalized prior to the divergence of cellular organisms into the three contemporary domains of life, whereas others were hijacked relatively recently. As a result, some capsid proteins are widespread in viruses infecting distantly related organisms (e.g., capsid proteins with the jelly-roll fold), whereas others are restricted to a particular group of viruses (e.g., capsid proteins of alphaviruses).
A computational model (2015) has shown that capsids may have originated before viruses and that they served as a means of horizontal transfer between replicator communities, since these communities could not survive if the number of gene parasites increased; certain genes were responsible for the formation of these structures and favored the survival of self-replicating communities. The displacement of these ancestral genes between cellular organisms could favor the appearance of new viruses during evolution.
|
6346
|
27823944
|
https://en.wikipedia.org/wiki?curid=6346
|
Chloramphenicol
|
Chloramphenicol is an antibiotic useful for the treatment of a number of bacterial infections. This includes use as an eye ointment to treat conjunctivitis. By mouth or by injection into a vein, it is used to treat meningitis, plague, cholera, and typhoid fever. Its use by mouth or by injection is only recommended when safer antibiotics cannot be used. Monitoring both blood levels of the medication and blood cell levels every two days is recommended during treatment.
Common side effects include bone marrow suppression, nausea, and diarrhea. The bone marrow suppression may result in death. To reduce the risk of side effects treatment duration should be as short as possible. People with liver or kidney problems may need lower doses. In young infants, a condition known as gray baby syndrome may occur which results in a swollen stomach and low blood pressure. Its use near the end of pregnancy and during breastfeeding is typically not recommended. Chloramphenicol is a broad-spectrum antibiotic that typically stops bacterial growth by stopping the production of proteins.
Chloramphenicol was discovered after being isolated from "Streptomyces venezuelae" in 1947. Its chemical structure was identified and it was first synthesized in 1949. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses.
The original indication of chloramphenicol was in the treatment of typhoid, but the presence of multiple drug-resistant "Salmonella typhi" has meant it is seldom used for this indication except when the organism is known to be sensitive.
In low-income countries, the WHO no longer recommends oily chloramphenicol as first-line treatment for meningitis, but recognises it may be used with caution if there are no available alternatives.
During the last decade chloramphenicol has been re-evaluated as an old agent with potential against systemic infections due to multidrug-resistant gram positive microorganisms (including vancomycin resistant enterococci). "In vitro" data have shown an activity against the majority (> 80%) of vancomycin resistant "E. faecium" strains.
In the context of preventing endophthalmitis, a complication of cataract surgery, a 2017 systematic review found moderate evidence that using chloramphenicol eye drops in addition to an antibiotic injection (cefuroxime or penicillin) will likely lower the risk of endophthalmitis, compared to eye drops or antibiotic injections alone.
Spectrum.
Chloramphenicol has a broad spectrum of activity and has been effective in treating ocular infections such as conjunctivitis and blepharitis caused by a number of bacteria including "Staphylococcus aureus", "Streptococcus pneumoniae", and "Escherichia coli". It is not effective against "Pseudomonas aeruginosa". The following susceptibility data represent the minimum inhibitory concentration for a few medically significant organisms.
Each of these concentrations is dependent upon the bacterial strain being targeted. Some strains of "E. coli", for example, show spontaneous emergence of chloramphenicol resistance.
Resistance.
Three mechanisms of resistance to chloramphenicol are known: reduced membrane permeability, mutation of the 50S ribosomal subunit, and elaboration of chloramphenicol acetyltransferase. It is easy to select for reduced membrane permeability to chloramphenicol "in vitro" by serial passage of bacteria, and this is the most common mechanism of low-level chloramphenicol resistance. High-level resistance is conferred by the "cat"-gene; this gene codes for an enzyme called chloramphenicol acetyltransferase, which inactivates chloramphenicol by covalently linking one or two acetyl groups, derived from acetyl-"S"-coenzyme A, to the hydroxyl groups on the chloramphenicol molecule. The acetylation prevents chloramphenicol from binding to the ribosome. Resistance-conferring mutations of the 50S ribosomal subunit are rare.
Chloramphenicol resistance may be carried on a plasmid that also codes for resistance to other drugs. One example is the ACCoT plasmid (A=ampicillin, C=chloramphenicol, Co=co-trimoxazole, T=tetracycline), which mediates multiple drug resistance in typhoid (also called R factors).
As of 2014 some "Enterococcus faecium" and" Pseudomonas aeruginosa" strains are resistant to chloramphenicol. Some "Veillonella" spp. and "Staphylococcus capitis" strains have also developed resistance to chloramphenicol to varying degrees.
Some other resistance genes beyond "cat" are known, such as chloramphenicol hydrolase, and chloramphenicol phosphotransferase.
Adverse effects.
Aplastic anemia.
The most serious side effect of chloramphenicol treatment is aplastic anaemia ('AA'). This effect is rare but sometimes fatal. The risk of AA is high enough that alternatives should be strongly considered. Treatments are available but expensive. No way exists to predict who may or may not suffer this side effect. The effect usually occurs weeks or months after treatment has been stopped, and a genetic predisposition may be involved. It is not known whether monitoring the blood counts of patients can prevent the development of aplastic anaemia, but patients are recommended to have a baseline blood count with a repeat blood count every few days while on treatment. Chloramphenicol should be discontinued if the complete blood count drops. The highest risk is with oral chloramphenicol (affecting 1 in 24,000–40,000) and the lowest risk occurs with eye drops (affecting less than one in 224,716 prescriptions).
Bone marrow suppression.
Chloramphenicol may cause bone marrow suppression during treatment; this is a direct toxic effect of the drug on human mitochondria. This effect manifests first as a fall in hemoglobin levels, which occurs quite predictably once a cumulative dose of 20 g has been given. The anaemia is fully reversible once the drug is stopped and does not predict future development of aplastic anaemia. Studies in mice have suggested existing marrow damage may compound any marrow damage resulting from the toxic effects of chloramphenicol.
Leukemia.
Leukemia, a cancer of the blood or bone marrow, is characterized by an abnormal increase of immature white blood cells. A Chinese case–control study demonstrated an increased risk of childhood leukemia after chloramphenicol treatment, with the risk increasing with the length of treatment.
Gray baby syndrome.
Intravenous chloramphenicol use has been associated with the so-called gray baby syndrome.
This phenomenon occurs in newborn infants because they do not yet have fully functional liver enzymes (i.e. UDP-glucuronyl transferase), so chloramphenicol remains unmetabolized in the body.
This causes several adverse effects, including hypotension and cyanosis. The condition can be prevented by using the drug at the recommended doses, and monitoring blood levels.
Hypersensitivity reactions.
Fever, macular and vesicular rashes, angioedema, urticaria, and anaphylaxis may occur. Herxheimer's reactions have occurred during therapy for typhoid fever.
Neurotoxic reactions.
Headache, mild depression, mental confusion, and delirium have been described in patients receiving chloramphenicol. Optic and peripheral neuritis have been reported, usually following long-term therapy. If this occurs, the drug should be promptly withdrawn. It is theorized that this is caused by chloramphenicol's effects on the metabolism of B vitamins, specifically vitamin B12.
Myelodysplastic syndrome.
Although rare, chloramphenicol exposure is associated with some cases of myelodysplastic syndrome (MDS). There is a report of a positive response to immunosuppressive treatment.
Pharmacokinetics.
Chloramphenicol is extremely lipid-soluble; it remains relatively unbound to protein and is a small molecule. It has a large apparent volume of distribution and penetrates effectively into all tissues of the body, including the brain. Distribution is not uniform: concentrations are highest in the liver and kidney and lowest in the brain and cerebrospinal fluid. The concentration achieved in brain and cerebrospinal fluid is around 30 to 50% of the overall average body concentration, even when the meninges are not inflamed; this increases to as high as 89% when the meninges are inflamed.
Chloramphenicol increases the absorption of iron.
Use in special populations.
Chloramphenicol is metabolized by the liver to chloramphenicol glucuronate (which is inactive). In liver impairment, the dose of chloramphenicol must therefore be reduced. No standard dose reduction exists for chloramphenicol in liver impairment, and the dose should be adjusted according to measured plasma concentrations.
The majority of the chloramphenicol dose is excreted by the kidneys as the inactive metabolite, chloramphenicol glucuronate. Only a tiny fraction of the chloramphenicol is excreted by the kidneys unchanged. Plasma levels should be monitored in patients with renal impairment, but this is not mandatory. Chloramphenicol succinate ester (an intravenous prodrug form) is readily excreted unchanged by the kidneys, more so than chloramphenicol base, and this is the major reason why levels of chloramphenicol in the blood are much lower when given intravenously than orally.
Dose monitoring.
Plasma levels of chloramphenicol must be monitored in neonates and patients with abnormal liver function. Plasma levels should be monitored in all children under the age of four, the elderly, and patients with kidney failure.
Because efficacy and toxicity of chloramphenicol are associated with a maximum serum concentration, peak levels (taken one hour after an intravenous dose is given) should be 10–20 μg/mL; trough levels (taken immediately before a dose) should be 5–10 μg/mL.
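These target ranges reduce to a simple interval check. The sketch below is illustrative only, not clinical software; the function name and structure are assumptions, with the ranges taken from the text above:

```python
# Minimal sketch: classify a measured chloramphenicol level against the
# target ranges quoted above. Illustrative only; not clinical guidance.

PEAK_RANGE = (10.0, 20.0)    # ug/mL, drawn one hour after an IV dose
TROUGH_RANGE = (5.0, 10.0)   # ug/mL, drawn immediately before a dose

def classify(level_ug_per_ml, target):
    """Return whether a measured level is below, within, or above target."""
    low, high = target
    if level_ug_per_ml < low:
        return "below target"
    if level_ug_per_ml > high:
        return "above target"
    return "within target"

print(classify(14.2, PEAK_RANGE))    # -> within target
print(classify(12.5, TROUGH_RANGE))  # -> above target
```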
Drug interactions.
Administration of chloramphenicol concomitantly with bone marrow depressant drugs is contraindicated, although concerns over aplastic anaemia associated with ocular chloramphenicol have largely been discounted.
Chloramphenicol is a potent inhibitor of the cytochrome P450 isoforms CYP2C19 and CYP3A4 in the liver. Inhibition of CYP2C19 causes decreased metabolism and therefore increased levels of, for example, antidepressants, antiepileptics, proton-pump inhibitors, and anticoagulants if they are given concomitantly. Inhibition of CYP3A4 causes increased levels of, for example, calcium channel blockers, immunosuppressants, chemotherapeutic drugs, benzodiazepines, azole antifungals, tricyclic antidepressants, macrolide antibiotics, SSRIs, statins, cardiac antiarrhythmics, antivirals, anticoagulants, and PDE5 inhibitors.
Drug antagonism.
Chloramphenicol is antagonistic with most cephalosporins, and using the two together should be avoided in the treatment of infections.
Drug synergism.
Chloramphenicol has demonstrated a synergistic effect when combined with fosfomycin against clinical isolates of "Enterococcus faecium".
Mechanism of action.
Chloramphenicol is a bacteriostatic agent, inhibiting protein synthesis. It prevents protein chain elongation by inhibiting the peptidyl transferase activity of the bacterial ribosome. It specifically binds to A2451 and A2452 residues in the 23S rRNA of the 50S ribosomal subunit, preventing peptide bond formation. Chloramphenicol directly interferes with substrate binding in the ribosome, as compared to macrolides, which sterically block the progression of the growing peptide.
History.
Chloramphenicol was first isolated from "Streptomyces venezuelae" in 1947, and in 1949 a team of scientists at Parke-Davis, including Mildred Rebstock, published their identification of the chemical structure and their synthesis.
In 1972, Senator Ted Kennedy cited the Tuskegee Syphilis Study and the 1958 Los Angeles infant chloramphenicol experiments as the initial subjects of a Senate subcommittee investigation into dangerous medical experimentation on human subjects.
In 2007, the accumulation of reports associating aplastic anemia and blood dyscrasia with chloramphenicol eye drops led to their classification as a "probable human carcinogen" according to World Health Organization criteria, based on published case reports and on spontaneous reports submitted to the National Registry of Drug-Induced Ocular Side Effects.
Society and culture.
Names.
Chloramphenicol is available as a generic worldwide under many brand names, and also under various generic names in eastern Europe and Russia, including chlornitromycin, levomycetin, and chloromycetin; the racemate is known as synthomycetin.
Formulations.
Chloramphenicol is available as a capsule or as a liquid. In some countries, it is sold as chloramphenicol palmitate ester (CPE). CPE is inactive, and is hydrolysed to active chloramphenicol in the small intestine. No difference in bioavailability is noted between chloramphenicol and CPE.
Manufacture of oral chloramphenicol in the U.S. stopped in 1991, because the vast majority of chloramphenicol-associated cases of aplastic anaemia are associated with the oral preparation. No oral formulation of chloramphenicol is available in the U.S. for human use.
Intravenous.
The intravenous (IV) preparation of chloramphenicol is the succinate ester. This creates a problem: chloramphenicol succinate ester is an inactive prodrug and must first be hydrolysed to chloramphenicol; however, the hydrolysis process is often incomplete, and about 30% of the dose is lost in the urine. Serum concentrations of IV chloramphenicol are only 70% of those achieved when chloramphenicol is given orally. For this reason, the dose needs to be increased to 75 mg/kg/day when administered IV to achieve levels equivalent to the oral dose.
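The arithmetic behind this adjustment is a straightforward scaling by the relative exposure. In the sketch below, the 70% figure comes from the text, while the 50 mg/kg/day oral dose is an assumed example chosen only to illustrate the calculation:

```python
# Minimal sketch: if IV dosing yields only ~70% of the serum levels achieved
# by the same oral dose, the IV dose is scaled up by 1/0.7 to match oral
# exposure. The 50 mg/kg/day oral dose is an assumed example, not a
# recommendation.

RELATIVE_IV_EXPOSURE = 0.70  # IV serum levels as a fraction of oral levels

def equivalent_iv_dose(oral_dose_mg_per_kg_day):
    return oral_dose_mg_per_kg_day / RELATIVE_IV_EXPOSURE

print(equivalent_iv_dose(50.0))  # ~71.4, in line with the ~75 mg/kg/day figure
```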
Oily.
Oily chloramphenicol (or chloramphenicol oil suspension) is a long-acting preparation of chloramphenicol first introduced by Roussel in 1954; marketed as Tifomycine, it was originally used as a treatment for typhoid. Roussel stopped production of oily chloramphenicol in 1995; the International Dispensary Association Foundation has manufactured it since 1998, first in Malta and then in India from December 2004.
Oily chloramphenicol was first used to treat meningitis in 1975 and numerous studies since have demonstrated its efficacy. It is the cheapest treatment available for meningitis (US$5 per treatment course, compared to US$30 for ampicillin and US$15 for five days of ceftriaxone). It has the great advantage of requiring only a single injection, whereas ceftriaxone is traditionally given daily for five days. This recommendation may yet change, now that a single dose of ceftriaxone (cost US$3) has been shown to be equivalent to one dose of oily chloramphenicol.
Eye drops.
Chloramphenicol is used in topical preparations (ointments and eye drops) for the treatment of bacterial conjunctivitis. Isolated case reports of aplastic anaemia following use of chloramphenicol eyedrops exist, but the risk is estimated to be of the order of less than one in 224,716 prescriptions. In Mexico, chloramphenicol is used prophylactically in newborns against neonatal conjunctivitis.
Veterinary uses.
Although its use in veterinary medicine is highly restricted, chloramphenicol still has some important veterinary uses. It is currently considered the most useful treatment of chlamydial disease in koalas. The pharmacokinetics of chloramphenicol have been investigated in koalas.
Biosynthesis.
The biosynthetic gene cluster and pathway for chloramphenicol were characterized from "Streptomyces venezuelae" ISP5230 (ATCC 17102). The chloramphenicol biosynthetic gene cluster currently has 17 genes with assigned roles.
Plasmid preparation.
Chloramphenicol is often used when growing "E. coli" cultures intended for plasmid preparation. Chloramphenicol halts protein synthesis, but allows plasmids with a relaxed origin of replication to keep replicating, thus improving yield.
|
6347
|
1298338719
|
https://en.wikipedia.org/wiki?curid=6347
|
Cut-up technique
|
The cut-up technique (or "découpé" in French) is an aleatory narrative technique in which a written text is cut up and rearranged to create a new text. The concept can be traced to the Dadaists of the 1920s, but it was developed and popularized in the 1950s and early 1960s, especially by writer William Burroughs. It has since been used in a wide variety of contexts.
Technique.
The cut-up and the closely associated fold-in are the two main techniques. In a cut-up, a finished, linear text is cut into pieces containing one or a few words each, and the pieces are then rearranged into a new text. In a fold-in, two sheets of linear text are each folded in half vertically and combined with one another, so that the resulting composite page is read across the fold.
William Burroughs cited T. S. Eliot's 1922 poem "The Waste Land" and John Dos Passos' "U.S.A." trilogy, which incorporated newspaper clippings, as early examples of the cut-ups he popularized.
Brion Gysin introduced Burroughs to the technique at the Beat Hotel. The pair later applied the technique to printed media and audio recordings in an effort to decode the material's implicit content, hypothesizing that such a technique could be used to discover the true meaning of a given text. Burroughs also suggested cut-ups may be effective as a form of divination, saying, "When you cut into the present the future leaks out." Burroughs further developed the "fold-in" technique. In 1977, Burroughs and Gysin published "The Third Mind", a collection of cut-up writings and essays on the form. Jeff Nuttall's publication "My Own Mag" was another important outlet for the then-radical technique.
In an interview, Alan Burns noted that for "Europe After The Rain" (1965) and subsequent novels he used a version of cut-ups: "I did not actually use scissors, but I folded pages, read across columns, and so on, discovering for myself many of the techniques Burroughs and Gysin describe."
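The mechanical core of the cut-up is simple to express in code. The sketch below is a minimal illustration, assuming fragments of a few words each and a uniform random shuffle; actual practitioners cut physical pages and exercised editorial judgment over the result:

```python
import random

def cut_up(text, fragment_size=4, seed=None):
    """Cut a text into fragments of roughly fragment_size words each and
    rearrange them at random, in the spirit of the Burroughs/Gysin cut-up."""
    words = text.split()
    fragments = [
        " ".join(words[i:i + fragment_size])
        for i in range(0, len(words), fragment_size)
    ]
    rng = random.Random(seed)  # seedable, for reproducible rearrangements
    rng.shuffle(fragments)
    return " ".join(fragments)

source = ("When you cut into the present the future leaks out "
          "and the word lines are rearranged into a new text")
print(cut_up(source, fragment_size=3, seed=1))
```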
History.
In literature.
A precedent for the technique occurred during a Dadaist rally in the 1920s, in which Tristan Tzara offered to create a poem on the spot by pulling words at random from a hat. Collage, which was popularized roughly contemporaneously with the Surrealist movement, sometimes incorporated texts such as newspapers or brochures. Prior to this event, the technique had been published in an issue of "391", in Tzara's poem "dada manifesto on feeble love and bitter love", under the sub-title "TO MAKE A DADAIST POEM".
In the 1950s, painter and writer Brion Gysin more fully developed the cut-up method after accidentally rediscovering it. He had placed layers of newspapers as a mat to protect a tabletop from being scratched while he cut papers with a razor blade. Upon cutting through the newspapers, Gysin noticed that the sliced layers offered interesting juxtapositions of text and image. He began deliberately cutting newspaper articles into sections, which he randomly rearranged. The book "Minutes to Go" resulted from his initial cut-up experiment: unedited and unchanged cut-ups which emerged as coherent and meaningful prose. South African poet Sinclair Beiles also used this technique and co-authored "Minutes To Go".
Argentine writer Julio Cortázar used cut-ups in his 1963 novel "Hopscotch".
In 1969, poets Howard W. Bergerson and J. A. Lindon developed a cut-up technique known as vocabularyclept poetry, in which a poem is formed by taking all the words of an existing poem and rearranging them, often preserving the metre and stanza lengths.
A drama scripted for five voices by performance poet Hedwig Gorski in 1977 originated the idea of creating poetry only for performance instead of for print publication. The "neo-verse drama" titled "Booby, Mama!" written for "guerilla theater" performances in public places used a combination of newspaper cut-ups that were edited and choreographed for a troupe of non-professional street actors.
Kathy Acker used cut-ups in some of her works, including the novel "Blood and Guts in High School".
In film.
Antony Balch and Burroughs created a collaborative film, "The Cut-Ups", which opened in London in 1967. This was part of an abandoned project called "Guerrilla Conditions", meant as a documentary on Burroughs and filmed throughout 1961–1965. Inspired by Burroughs' and Gysin's technique of cutting up text and rearranging it in random order, Balch had an editor cut his footage for the documentary into little pieces and impose no control over its reassembly. The film opened at Oxford Street's Cinephone cinema and provoked a disturbed reaction: many audience members claimed the film made them ill, others demanded their money back, while some just stumbled out of the cinema ranting "it's disgusting". Other cut-up films include "Ghost at n°9 (Paris)" (1963–1972), a posthumously released short film compiled from reels found at Balch's office after his death, as well as "William Buys a Parrott" (1982), "Bill and Tony" (1972), "Towers Open Fire" (1963) and "The Junky's Christmas" (1966).
In music.
In 1962, the satirical comedy group the Bonzo Dog Doo-Dah Band got their name after using the cut-up technique, which yielded "Bonzo Dog Dada": "Bonzo Dog" after the cartoon character Bonzo the Dog, and "Dada" after the Dada avant-garde art movement. The group's eventual frontman, Vivian Stanshall, spoke of wanting to form a band with that name. The "Dada" in the phrase was eventually changed to "Doo-Dah".
From the early 1970s, David Bowie used cut-ups to create some of his lyrics. In 1995, he worked with Ty Roberts to develop a program called "Verbasizer" for his Apple PowerBook that could automatically rearrange multiple sentences written into it. Thom Yorke applied a similar method in Radiohead's "Kid A" (2000) album, writing single lines, putting them into a hat, and drawing them out at random while the band rehearsed the songs. Perhaps indicative of Thom Yorke's influences, instructions for "How to make a Dada poem" appeared on Radiohead's website at this time.
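The sentence-level shuffling described for the Verbasizer and for Yorke's hat method can be sketched in a few lines; the following is an illustrative reconstruction of the idea, not Bowie's actual program:

```python
import random
import re

def shuffle_sentences(text, seed=None):
    """Split text into sentences and return them in random order,
    in the spirit of sentence-level cut-up tools."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return " ".join(sentences)

lines = ("The future leaks out. The word lines are cut. "
         "Nothing is true. Everything is permitted.")
print(shuffle_sentences(lines, seed=7))
```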
Stephen Mallinder of Cabaret Voltaire reported to "Inpress" magazine's Andrez Bergen that "I do think the manipulation of sound in our early days – the physical act of cutting up tapes, creating tape loops and all that – has a strong reference to Burroughs and Gysin." Another industrial music pioneer, Al Jourgensen of Ministry, named Burroughs and his cut-up technique as the most important influence on how he approached the use of samples.
Many Elephant 6 bands used découpé as well; one prominent example is "Pree-Sisters Swallowing A Donkey's Eye" by Neutral Milk Hotel.
|
6352
|
22986354
|
https://en.wikipedia.org/wiki?curid=6352
|
Congenital iodine deficiency syndrome
|
Congenital iodine deficiency syndrome (CIDS), also called cretinism, is a medical condition present at birth marked by impaired physical and mental development, due to insufficient thyroid hormone production (hypothyroidism) often caused by insufficient dietary iodine during pregnancy. It is one cause of underactive thyroid function at birth, called congenital hypothyroidism. If untreated, it results in impairment of both physical and mental development. Symptoms may include: goiter, poor length growth in infants, reduced adult stature, thickened skin, hair loss, enlarged tongue, a protruding abdomen, delayed bone maturation and puberty in children, mental deterioration, neurological impairment, impeded ovulation, and infertility in adults.
In developed countries, thyroid function testing of newborns has assured that in those affected, treatment with the synthetic thyroid hormone thyroxine is begun promptly. This screening and treatment successfully cures the disease.
Signs and symptoms.
Iodine deficiency causes gradual enlargement of the thyroid gland, referred to as a goiter. Poor length growth is apparent as early as the first year of life, and adult stature without treatment is markedly reduced, depending on severity, sex, and other genetic factors. Other signs include thickened skin, hair loss, enlarged tongue, and a protruding abdomen. In children, bone maturation and puberty are severely delayed. In adults, ovulation is impeded and infertility is common.
Mental deterioration is common. Neurological impairment may be mild, with reduced muscle tone and motor coordination, or so severe that the person cannot stand or walk. Cognitive impairment may also range from mild to so severe that the person is nonverbal and dependent on others for basic care. Thought and reflexes are slower.
Cause.
Around the world, the most common cause of congenital iodine deficiency syndrome (endemic cretinism) is dietary iodine deficiency.
Iodine is an essential trace element, necessary for the synthesis of thyroid hormones. Iodine deficiency is the most common preventable cause of neonatal and childhood brain damage worldwide. Although iodine is found in many foods, it is not universally present in all soils in adequate amounts. Most iodine, in iodide form, is in the oceans, where the iodide ions are reduced to elemental iodine, which then enters the atmosphere and falls to earth in rain, introducing iodine to soils. Soil deficient in iodine is most common inland, in mountainous areas, and in areas of frequent flooding. It can also occur in coastal regions, where iodine might have been removed from the soil by glaciation, as well as leaching by snow, water and heavy rainfall. Plants and animals grown in iodine-deficient soils are correspondingly deficient. Populations living in those areas without outside food sources are most at risk of iodine deficiency diseases.
Diagnosis.
Differential diagnosis.
Dwarfism may also be caused by malnutrition or other hormonal deficiencies, such as insufficient growth hormone secretion, hypopituitarism, decreased secretion of growth hormone-releasing hormone, deficient growth hormone receptor activity and downstream causes, such as insulin-like growth factor 1 (IGF-1) deficiency.
Prevention.
There are public health campaigns in many countries which involve iodine administration. As of December 2019, 122 countries have mandatory iodine food fortification programs.
Treatment.
Congenital iodine deficiency has been almost eliminated in developed countries through iodine supplementation of food and by newborn screening using a blood test for thyroid function.
Treatment consists of lifelong administration of thyroxine (T4). Thyroxine must be dosed as tablets only, even to newborns, as the liquid oral suspensions and compounded forms cannot be depended on for reliable dosing. For infants, the T4 tablets are generally crushed and mixed with breast milk, formula milk or water. If the medication is mixed with formulas containing iron or soya products, larger doses may be required, as these substances may alter the absorption of thyroid hormone from the gut. Monitoring TSH blood levels every 2–3 weeks during the first months of life is recommended to ensure that affected infants are at the high end of the normal range.
History.
A goiter is the most specific clinical marker of either the direct or indirect insufficient intake of iodine in the human body. There is evidence of goiter, and of its medical treatment with iodine-rich algae and burnt sponges, in ancient Chinese, Egyptian, and Roman medical texts. In 1848, King Carlo Alberto of Sardinia commissioned the first epidemiological study of congenital iodine deficiency syndrome in northern Savoy, where it was frequent. In past centuries, the well-documented social diseases prevalent among the poorer social classes and farmers, caused by dietary and agricultural monocultures, were pellagra, rickets, beriberi, scurvy among long-term sailors, and the endemic goiter caused by iodine deficiency. The goiter, however, was less mentioned in medical books because it was erroneously considered an aesthetic rather than a clinical disorder.
Congenital iodine-deficiency syndrome was especially common in areas of southern Europe around the Alps and was often described by ancient Roman writers and depicted by artists. The earliest Alpine mountain climbers sometimes came upon whole villages affected by it. The prevalence of the condition was described from a medical perspective by several travellers and physicians in the late 18th and early 19th centuries. At that time the cause was not known and it was often attributed to "stagnant air" in mountain valleys or "bad water". The proportion of people affected varied markedly throughout southern Europe and even within very small areas; it might be common in one valley and not another. The number of severely affected persons was always a minority, and most persons were only affected to the extent of having a goitre and some degree of reduced cognition and growth. The majority of such cases were still socially functional in their pastoral villages.
More mildly affected areas of Europe and North America in the 19th century were referred to as "goitre belts". The degree of iodine deficiency was milder and manifested primarily as thyroid enlargement rather than severe mental and physical impairment. In Switzerland, for example, where soil does not contain a large amount of iodine, cases of congenital iodine deficiency syndrome were very abundant and even considered genetically caused. As the variety of food sources dramatically increased in Europe and North America and the populations became less completely dependent on locally grown food, the prevalence of endemic goitre diminished. This is supported by a 1979 WHO publication which concluded that "changes in the origin of food supplies may account for the otherwise unexplained disappearance of endemic goitre from a number of localities during the past 50 years".
The early 20th century saw the discovery of the relationships of neurological impairment with hypothyroidism due to iodine deficiency. Both have been largely eliminated in the developed world.
Terminology.
The term "cretin" was originally used to describe a person affected by this condition, but, as with words such as "spastic" and "lunatic", it underwent pejoration and is now considered derogatory and inappropriate. "Cretin" became a medical term in the 18th century, from an Occitan and an Alpine French expression, prevalent in a region where persons with such a condition were especially common (see below); it saw wide medical use in the 19th and early 20th centuries, and was a "tick box" category on Victorian-era census forms in the UK. The term spread more widely in popular English as a markedly derogatory term for a person who behaves stupidly. Because of its pejorative connotations in popular speech, current usage among health care professionals has abandoned the noun "cretin" referring to a person. The noun "cretinism", referring to the condition, still occurs in medical literature and textbooks but its use is waning.
The etymology of "cretin" is uncertain. Several hypotheses exist. The most common derivation provided in English dictionaries is from the Alpine French dialect pronunciation of the word "Chrétien" ("(a) Christian"), which was a greeting there. According to the "Oxford English Dictionary", the translation of the French term into "human creature" implies that the label "Christian" is a reminder of the humanity of the affected, in contrast to brute beasts. Other sources suggest that "Christian" describes the person's "Christ-like" inability to sin, stemming, in such cases, from an incapacity to distinguish right from wrong.
Other speculative etymologies have also been offered.
|
6354
|
222151
|
https://en.wikipedia.org/wiki?curid=6354
|
Council of Trent
|
The Council of Trent, held between 1545 and 1563 in Trent (or Trento), now in northern Italy, was the 19th ecumenical council of the Catholic Church. Prompted by the Protestant Reformation at the time, it has been described as the "most impressive embodiment of the ideals of the Counter-Reformation." It was the last time an ecumenical council was organized outside the city of Rome.
The Council issued key statements and clarifications of the Church's doctrine and teachings, including scripture, the biblical canon, sacred tradition, original sin, justification, salvation, the sacraments, the Mass, and the veneration of saints and also issued condemnations of what it defined to be heresies committed by proponents of Protestantism. The consequences of the council were also significant with regard to the Church's liturgy and censorship.
The Council met for twenty-five sessions between 13 December 1545 and 4 December 1563. Pope Paul III, who convoked the council, oversaw the first eight sessions (1545–1547), while the twelfth to sixteenth sessions (1551–52) were overseen by Pope Julius III and the seventeenth to twenty-fifth sessions (1562–63) by Pope Pius IV. More than three hundred years passed until the next ecumenical council, the First Vatican Council, was convened in 1869.
Background information.
Obstacles and events before the Council.
On 15 March 1517, the Fifth Council of the Lateran closed its activities with a number of reform proposals (on the selection of bishops, taxation, censorship and preaching) but not on the new major problems that confronted the Church in Germany and other parts of Europe. A few months later, on 31 October 1517, Martin Luther issued his "95 Theses" in Wittenberg.
A general, free council in Germany.
Luther's position on ecumenical councils shifted over time, but in 1520 he appealed to the German princes to oppose the papal Church at the time, if necessary with a council in Germany, open and free of the Papacy. After the Pope condemned in "Exsurge Domine" fifty-two of Luther's theses as heresy, German opinion considered a council the best method to reconcile existing differences. German Catholics, diminished in number, hoped for a council to clarify matters.
It took a generation for the council to materialise, partly due to papal fears over potentially renewing a schism over conciliarism; partly because Lutherans demanded the exclusion of the papacy from the council; partly because of ongoing political rivalries between France and the Holy Roman Empire; and partly due to the Turkish dangers in the Mediterranean. Under Pope Clement VII (1523–34), mutinous troops of the Catholic Holy Roman Emperor Charles V, many of them Lutheran, sacked Papal Rome in 1527, "raping, killing, burning, stealing, the like had not been seen since the Vandals". Saint Peter's Basilica and the Sistine Chapel were used to stable horses. Pope Clement, fearful of the potential for more violence, delayed calling the council.
Charles V strongly favoured a council but needed the support of King Francis I of France, who attacked him militarily. Francis I generally opposed a general council due to partial support of the Protestant cause within France. Charles' younger brother Ferdinand of Austria, who ruled a huge swath of territory in central Europe, agreed in 1532 to the Nuremberg Religious Peace granting religious liberty to the Protestants, and in 1533 he further complicated matters by suggesting a general council that would include both the Catholic and Protestant rulers of Europe and would devise a compromise between the two theological systems. This proposal met with the opposition of the Pope, for it gave recognition to Protestants and also elevated the secular princes of Europe above the clergy on church matters. Faced with a Turkish attack, Charles held the support of the Protestant German rulers, all of whom delayed the opening of the Council of Trent.
Occasion, sessions, and attendance.
In the to-and-fro of medieval politics, Pope Pius II, in his bull "Execrabilis" (1460) and his reply to the University of Cologne (1463), had set aside the theory of the supremacy of general councils laid down by the Council of Constance, which had also called for frequent ecumenical councils every ten years to cope with the backlog of reform and heresies.
Martin Luther had appealed for a general council, in response to the Papal bull "Exsurge Domine" of Pope Leo X (1520). In 1522 German diets joined in the appeal, with Charles V seconding and pressing for a council as a means of reunifying the Church and settling the Reformation controversies. Pope Clement VII (1523–34) was vehemently against the idea of a council, agreeing with Francis I of France.
Sessions.
The history of the council is divided into three distinct periods: 1545–1549, 1551–1552 and 1562–1563.
The number of attending members in the three periods varied considerably. The council was small to begin with, opening with only about 30 bishops. It increased toward the close, but never reached the number of the First Council of Nicaea (which had 318 members) nor of the First Vatican Council (which numbered 744). The decrees were signed in 1563 by 255 members, the highest attendance of the whole council, including four papal legates, two cardinals, three patriarchs, twenty-five archbishops, and 168 bishops, two-thirds of whom were Italians. The Italian and Spanish prelates were vastly preponderant in power and numbers. At the passage of the most important decrees, not more than sixty prelates were present. Although most Protestants did not attend, ambassadors and theologians of Brandenburg, Württemberg, and Strasbourg attended having been granted an improved safe conduct.
Pre-council.
Pope Paul III (1534–1549), seeing that the Protestant Reformation was no longer confined to a few preachers, but had won over various princes, especially in Germany, to its ideas, desired a council. Yet when he proposed the idea to his cardinals, it was almost unanimously opposed. Nonetheless, he sent nuncios throughout Europe to propose the idea, and issued a decree for a general council to be held in Mantua, Italy, beginning on 23 May 1537. Martin Luther wrote the Smalcald Articles in preparation for the general council; they were designed to define sharply where the Lutherans could and could not compromise.
It failed to convene after another war broke out between France and Charles V, resulting in a non-attendance of French prelates. Protestants refused to attend as well. Financial difficulties in Mantua led the Pope in the autumn of 1537 to move the council to Vicenza, where participation was poor. The council was postponed indefinitely on 21 May 1539.
Pope Paul III then initiated several internal Church reforms while Emperor Charles V convened with Protestants and Cardinal Gasparo Contarini at the Diet of Regensburg, to reconcile differences. Mediating and conciliatory formulations were developed on certain topics. In particular, a two-part doctrine of "justification" was formulated that would later be rejected at Trent. Unity failed between Catholic and Protestant representatives "because of different concepts of "Church" and "Justification"".
First period.
However, the council was delayed until 1545 and, as it happened, convened right before Luther's death. Unable, however, to resist the urging of Charles V, the pope, after proposing Mantua as the place of meeting, convened the council at Trent (at that time ruled by a prince-bishop under the Holy Roman Empire), on 13 December 1545; the Pope's decision to transfer it to Bologna in March 1547 on the pretext of avoiding a plague failed to take effect and the council was indefinitely prorogued on 17 September 1549. None of the three popes reigning over the duration of the council ever attended, which had been a condition of Charles V. Papal legates were appointed to represent the Papacy.
Second period.
Reopened at Trent on 1 May 1551 by the convocation of Pope Julius III (1550–1555), the council was broken up by the sudden victory of Maurice, Elector of Saxony, over Emperor Charles V and his march into the surrounding state of Tirol on 28 April 1552. There was no hope of reassembling the council while the very anti-Protestant Paul IV was Pope.
During the second period, the Protestants present asked for a renewed discussion on points already defined and for bishops to be released from their oaths of allegiance to the Pope. When the last period began, all intention of conciliating the Protestants was gone and the Jesuits had become a strong force. This last period was begun especially as an attempt to prevent the formation of a general council including Protestants, as had been demanded by some in France.
Third period.
The council was reconvened by Pope Pius IV (1559–1565) for the last time, meeting from 18 January 1562 at Santa Maria Maggiore, and continued until its final adjournment on 4 December 1563. It closed with a series of ritual acclamations honouring the reigning Pope, the Popes who had convoked the council, the emperor and the kings who had supported it, the papal legates, the cardinals, the ambassadors present, and the bishops, followed by acclamations of acceptance of the faith of the council and its decrees, and of anathema for all heretics.
The French monarchy boycotted the entire council until the last minute, when a delegation led by Charles de Guise, Cardinal of Lorraine, finally arrived in November 1562. The first outbreak of the French Wars of Religion had occurred earlier in the year, and the French Church, facing a significant and powerful Protestant minority in France, experienced iconoclastic violence over the use of sacred images. Such concerns were not primary in the Italian and Spanish Churches. The last-minute inclusion of a decree on sacred images was a French initiative, and the text, never discussed on the floor of the council or referred to council theologians, was based on a French draft.
Objectives and overall results.
The main objectives of the council were twofold: to condemn the principles and doctrines of Protestantism and to clarify the doctrines of the Catholic Church on all disputed points, and to effect a reformation in discipline and administration.
Specific issues discussed included the biblical canon and sacred tradition, original sin, justification, the sacraments, the sacrifice of the Mass, and the veneration of saints.
The doctrinal decisions of the council were set forth in decrees ("decreta"), which are divided into chapters ("capita"), which contain the positive statement of the conciliar dogmas, and into short canons ("canones"), which condemn incorrect views (often a Protestant-associated notion stated in an extreme form) with the concluding "anathema sit" ("let him be anathema" i.e., excluded from the society of the faithful).
The consequences of the council were also significant with regard to the Church's liturgy and practices. In its decrees, the council made the Latin Vulgate the official biblical text of the Roman Church (without prejudice to the original texts in Hebrew and Greek, nor to other traditional translations of the Church, but favoring the Latin language over vernacular translations, such as the controversial English-language Tyndale Bible). In doing so, the council commissioned the creation of a revised and standardized Vulgate in light of textual criticism, although this was not achieved until the 1590s. The council also officially affirmed the traditional Catholic canon of biblical books, which was identical to the canon of Scripture issued by the Council of Rome under Pope Damasus in 382. This was in response to the increasing Protestant exclusion of the deuterocanonical books. An earlier dogmatic affirmation of the canonical books had been made at the Council of Florence in the 1441 bull "Cantate Domino", as affirmed by Pope Leo XIII in his 1893 encyclical "Providentissimus Deus" (#20). In 1565, a year after the council finished its work, Pius IV issued the Tridentine Creed (after "Tridentum", Trent's Latin name), and his successor Pius V then issued the Roman Catechism and revisions of the Breviary and Missal in, respectively, 1566, 1568 and 1570. These, in turn, led to the codification of the Tridentine Mass, which remained the Church's primary form of the Mass for the next four hundred years.
Decrees.
The doctrinal acts are as follows:
After reaffirming the Niceno-Constantinopolitan Creed (third session), the decree was passed (fourth session) confirming that the deuterocanonical books were on a par with the other books of the canon (against Luther's placement of these books in the Apocrypha of his edition) and coordinating church tradition with the Scriptures as a rule of faith. The Vulgate translation was affirmed to be authoritative for the text of Scripture.
Justification (sixth session) was declared to be offered upon the basis of human cooperation with divine grace (synergism) as opposed to the typical Protestant doctrine of passive reception of grace (monergism). Understanding the Protestant "faith alone" doctrine to be one of simple human confidence in Divine Mercy, the Council rejected the "vain confidence" of the Protestants, stating that no one can know infallibly who has received the grace of final perseverance apart from receiving a special revelation. Furthermore, the Council affirmed—against some Protestants—that the grace of God can be forfeited through mortal sin.
The greatest weight in the council's decrees is given to the sacraments. The seven sacraments were reaffirmed and the Eucharist pronounced to be a true propitiatory sacrifice as well as a sacrament, in which the bread and wine were consecrated into the Eucharist (thirteenth and twenty-second sessions). The term transubstantiation was used by the council, but the specific Aristotelian explanation given by Scholasticism was not cited as dogmatic. Instead, the decree states that Christ is "really, truly, substantially present" in the consecrated forms. The sacrifice of the Mass was to be offered for dead and living alike and in giving to the apostles the command "do this in remembrance of me," Christ conferred upon them a sacerdotal power. The practice of withholding the cup from the laity was confirmed (twenty-first session) as one which the Church Fathers had commanded for good and sufficient reasons; yet in certain cases the Pope was made the supreme arbiter as to whether the rule should be strictly maintained.
Ordination (twenty-third session) was defined to imprint an indelible character on the soul. The priesthood of the New Testament takes the place of the Levitical priesthood. To the performance of its functions, the consent of the people is not necessary.
In the decrees on marriage (twenty-fourth session) the excellence of the celibate state was reaffirmed, concubinage condemned, and the validity of marriage made dependent upon the wedding taking place before a priest and two witnesses, although the lack of a requirement for parental consent ended a debate that had continued since the 12th century. In the case of a divorce, the right of the innocent party to marry again was denied so long as the other party was alive, even if the other party had committed adultery. However, the council "refused … to assert the necessity or usefulness of clerical celibacy".
In the twenty-fifth and last session, the doctrines of purgatory, the invocation of saints and the veneration of relics were reaffirmed, as was also the efficacy of indulgences as dispensed by the Church according to the power given her, but with some cautionary recommendations and a ban on the sale of indulgences. Short and rather inexplicit passages concerning religious images were to have great impact on the development of Catholic Church art. Much more than the Second Council of Nicaea (787), the Council fathers of Trent stressed the pedagogical purpose of Christian images.
Baroque art is in part a consequence of the Council of Trent, more specifically its twenty-fifth session, which emphasized that sacred art should educate the faithful, inspire devotion, and accurately represent biblical narratives. All this led to a renewed focus on emotional engagement and clarity in religious painting. Due to these new directives, the Catholic Church began to promote baroque art, characterized by dramatic compositions, chiaroscuro, and theatrical gestures. The Church's adoption of the style helped to increase the spread of its influence.
Practical.
On the language of the Mass, "contrary to what is often said", the council condemned the insistence that only vernacular languages must be used, while affirming the use of Latin for the Roman rite. However, elements of the Prône, the vernacular catechetical preaching service common in the medieval High Mass (and in some extra-liturgical situations), became mandatory for Sundays and feast days (fifth session, chapter 2).
The council appointed, in 1562 (eighteenth session), a commission to prepare a list of forbidden books ("Index Librorum Prohibitorum"), but it later left the matter to the Pope. The preparation of a catechism and the revision of the Breviary and Missal were also left to the pope. The catechism embodied the council's far-reaching results, including reforms and definitions of the sacraments, the Scriptures, church dogma, and duties of the clergy.
Ratification and promulgation.
On adjourning, the Council asked the supreme pontiff to ratify all its decrees and definitions. This petition was complied with by Pope Pius IV on 26 January 1564 in the papal bull "Benedictus Deus", which enjoins strict obedience upon all Catholics and forbids, under pain of excommunication, all unauthorised interpretation, reserving this to the Pope alone, and threatens the disobedient with "the indignation of Almighty God and of his blessed apostles, Peter and Paul." Pope Pius appointed a commission of cardinals to assist him in interpreting and enforcing the decrees.
The "Index Librorum Prohibitorum" was announced in 1564 and the following books were issued with the papal imprimatur: the Profession of the Tridentine Faith and the Tridentine Catechism (1566), the Breviary (1568), the Missal (1570) and the Vulgate (1590 and then 1592).
The decrees of the council were acknowledged in Italy, Portugal, Poland and by the Catholic princes of Germany at the Diet of Augsburg in 1566. Philip II of Spain accepted them for Spain, the Netherlands and Sicily inasmuch as they did not infringe the royal prerogative. In France, they were officially recognised by the king only in their doctrinal parts. Although the disciplinary or moral reformatory decrees were never published by the throne, they received official recognition at provincial synods and were enforced by the bishops. Holy Roman Emperors Ferdinand I and Maximilian II never recognized the existence of any of the decrees. No attempt was made to introduce the decrees into England. Pius IV sent the decrees to Mary, Queen of Scots, with a letter dated 13 June 1564, requesting that she publish them in Scotland, but she dared not do it in the face of John Knox and the Reformation.
These decrees were later supplemented by the First Vatican Council of 1870.
Publication of documents.
A comprehensive history is found in Hubert Jedin's "The History of the Council of Trent (Geschichte des Konzils von Trient)" with about 2,500 pages in four volumes: "The History of the Council of Trent: The fight for a Council" (Vol I, 1951); "The History of the Council of Trent: The first Sessions in Trent (1545–1547)" (Vol II, 1957); "The History of the Council of Trent: Sessions in Bologna 1547–1548 and Trento 1551–1552" (Vol III, 1970, 1998); "The History of the Council of Trent: Third Period and Conclusion" (Vol IV, 1976).
The canons and decrees of the council have been published very often and in many languages. The first issue was by Paulus Manutius (Rome, 1564). Commonly used Latin editions are by Judocus Le Plat (Antwerp, 1779) and by Johann Friedrich von Schulte and Aemilius Ludwig Richter (Leipzig, 1853). Other editions are in vol. vii. of the "Acta et decreta conciliorum recentiorum. Collectio Lacensis" (7 vols., Freiburg, 1870–90), reissued as independent volume (1892); "Concilium Tridentinum: Diariorum, actorum, epistularum, … collectio", ed. Sebastianus Merkle (4 vols., Freiburg, 1901 sqq.); as well as Mansi, "Concilia", xxxv. 345 sqq. Note also Carl Mirbt, "Quellen", 2d ed, pp. 202–255. An English edition is by James Waterworth (London, 1848; "With Essays on the External and Internal History of the Council").
The original acts and debates of the council, as prepared by its general secretary, Bishop Angelo Massarelli, in six large folio volumes, are deposited in the Vatican Library, where they remained unpublished for more than 300 years until they were brought to light, though only in part, by Augustin Theiner, priest of the oratory (d. 1874), in "Acta genuina sancti et oecumenici Concilii Tridentini nunc primum integre edita" (2 vols., Leipzig, 1874).
Most of the official documents and private reports, however, which bear upon the council, were made known in the 16th century and since. The most complete collection of them is that of J. Le Plat, "Monumentorum ad historicam Concilii Tridentini collectio" (7 vols., Leuven, 1781–87). New materials appeared in Vienna (1872); in J. J. I. von Döllinger's "Ungedruckte Berichte und Tagebücher zur Geschichte des Concilii von Trient" (2 parts, Nördlingen, 1876); and in August von Druffel's "Monumenta Tridentina" (Munich, 1884–97).
Protestant response.
Out of 87 books written between 1546 and 1564 attacking the Council of Trent, 41 were written by Pier Paolo Vergerio, a former papal nuncio turned Protestant Reformer. The 1565–73 "Examen decretorum Concilii Tridentini" ("Examination of the Council of Trent") by Martin Chemnitz was the main Lutheran response to the Council of Trent. Making extensive use of scripture and patristic sources, it was presented in response to a polemical writing which Diogo de Payva de Andrada had directed against Chemnitz. The "Examen" had four parts: Volume I examined sacred scripture, free will, original sin, justification, and good works. Volume II examined the sacraments, including baptism, confirmation, the sacrament of the Eucharist, communion under both kinds, the Mass, penance, extreme unction, holy orders, and matrimony. Volume III examined virginity, celibacy, purgatory, and the invocation of saints. Volume IV examined the relics of the saints, images, indulgences, fasting, the distinction of foods, and festivals.
In response, Andrada wrote the five-part "Defensio Tridentinæ fidei", which was published posthumously in 1578. However, the "Defensio" did not circulate as extensively as the "Examen", nor were full translations initially published. A French translation of the "Examen" by Eduard Preuss was published in 1861. German translations were published in 1861, 1884, and 1972. In English, a complete translation by Fred Kramer drawing from the original Latin and the 1861 German was published beginning in 1971.
|
6355
|
27823944
|
https://en.wikipedia.org/wiki?curid=6355
|
Chloroplast
|
A chloroplast () is a type of organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Chloroplasts have a high concentration of chlorophyll pigments which capture the energy from sunlight and convert it to chemical energy and release oxygen. The chemical energy created is then used to make sugar and other organic molecules from carbon dioxide in a process called the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in some unicellular algae, up to 100 in plants like "Arabidopsis" and wheat.
Chloroplasts are highly dynamic: they circulate and are moved around within cells. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts cannot be made anew by the plant cell and must be inherited by each daughter cell during cell division, a trait thought to be inherited from their ancestor, a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell.
Chloroplasts evolved from an ancient cyanobacterium that was engulfed by an early eukaryotic cell. Because of their endosymbiotic origins, chloroplasts, like mitochondria, contain their own DNA separate from the cell nucleus. With one exception (the amoeboid "Paulinella chromatophora"), all chloroplasts can be traced back to a single endosymbiotic event. Despite this, chloroplasts can be found in extremely diverse organisms that are not directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.
Discovery and etymology.
The first definitive description of a chloroplast ("Chlorophyllkörnen", "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, Andreas Franz Wilhelm Schimper named these bodies as "chloroplastids" ("Chloroplastiden"). In 1884, Eduard Strasburger adopted the term "chloroplasts" ("Chloroplasten").
The word "chloroplast" is derived from the Greek words "chloros" (χλωρός), which means green, and "plastes" (πλάστης), which means "the one who forms".
Endosymbiotic origin of chloroplasts.
Chloroplasts are one of many types of organelles in photosynthetic eukaryotic cells. They evolved from cyanobacteria through a process called organellogenesis. Cyanobacteria are a diverse phylum of gram-negative bacteria capable of carrying out oxygenic photosynthesis. Like chloroplasts, they have thylakoids. The thylakoid membranes contain photosynthetic pigments, including chlorophyll "a". This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905 after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and some species of the amoeboid "Paulinella".
Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed.
Primary endosymbiosis.
Approximately two billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in and persist inside the cell. This event is called "endosymbiosis", or "a cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the "host" while the internal cell is called the "endosymbiont". The engulfed cyanobacterium provided an advantage to the host by supplying sugar from photosynthesis. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. Some of the cyanobacterial proteins were then synthesized by the host cell and imported back into the chloroplast (formerly the cyanobacterium), allowing the host to control the chloroplast.
Chloroplasts which can be traced back directly to a cyanobacterial ancestor (i.e. without a subsequent endosymbiotic event) are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). Chloroplasts that can be traced back to another photosynthetic eukaryotic endosymbiont are called secondary plastids or tertiary plastids (discussed below).
Whether primary chloroplasts came from a single endosymbiotic event or from multiple independent engulfments across various eukaryotic lineages was long debated. It is now generally held that, with one exception (the amoeboid "Paulinella chromatophora"), chloroplasts arose from a single endosymbiotic event around two billion years ago and that these chloroplasts all share a single ancestor. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is "Gloeomargarita lithophora". Separately, somewhere about 90–140 million years ago, this process happened again in the amoeboid "Paulinella" with a cyanobacterium in the genus "Prochlorococcus". This independently evolved chloroplast is often called a "chromatophore" instead of a chloroplast.
Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called "serial endosymbiosis": an early eukaryote engulfed the mitochondrion ancestor, and its descendants then engulfed the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.
Secondary and tertiary endosymbiosis.
Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga with a primary chloroplast. These chloroplasts are known as secondary plastids.
As a result of the secondary endosymbiotic event, secondary chloroplasts have additional membranes outside of the original two in primary chloroplasts. In secondary plastids, typically only the chloroplast, and sometimes its cell membrane and nucleus remain, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane.
The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast.
All secondary chloroplasts come from green and red algae. No secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote.
Still other organisms, including the dinoflagellates "Karlodinium" and "Karenia," obtained chloroplasts by engulfing an organism with a secondary plastid. These are called tertiary plastids.
Primary chloroplast lineages.
All primary chloroplasts belong to one of four chloroplast lineages: the glaucophyte chloroplast lineage, the rhodophyte ("red") chloroplast lineage, the chloroplastidan ("green") chloroplast lineage, and the amoeboid "Paulinella chromatophora" lineage. The glaucophyte, rhodophyte, and chloroplastidan lineages are all descended from the same ancestral endosymbiotic event and are all within the group Archaeplastida.
Glaucophyte chloroplasts.
The glaucophyte chloroplast group is the smallest of the three primary chloroplast lineages, as there are only 25 described glaucophyte species. Glaucophytes diverged first, before the red and green chloroplast lineages split. Because of this, they are sometimes considered intermediates between cyanobacteria and the red and green chloroplasts. This early divergence is supported by both phylogenetic studies and physical features present in glaucophyte chloroplasts and cyanobacteria, but not the red and green chloroplasts. First, glaucophyte chloroplasts have a peptidoglycan wall, a type of cell wall otherwise found only in bacteria (including cyanobacteria). Second, glaucophyte chloroplasts contain concentric unstacked thylakoids which surround a carboxysome, an icosahedral structure that contains the enzyme RuBisCO responsible for carbon fixation. Third, starch created by the chloroplast is collected outside the chloroplast. Additionally, like cyanobacteria, both glaucophyte and rhodophyte thylakoids are studded with light-collecting structures called phycobilisomes.
Rhodophyta (red chloroplasts).
The rhodophyte, or red algae, group is a large and diverse lineage. Rhodophyte chloroplasts are also called "rhodoplasts", literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll "a" and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll "a" and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.
Chloroplastida (green chloroplasts).
The chloroplastida group is another large, highly diverse lineage that includes both green algae and land plants. This group is also called Viridiplantae, which includes two core clades—Chlorophyta and Streptophyta.
Most green chloroplasts are green in color, though some are not, owing to accessory pigments that mask the green of the chlorophylls, as in the resting cells of "Haematococcus pluvialis". Green chloroplasts differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes and contain chlorophyll "b". They have also lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants have kept some genes required for the synthesis of peptidoglycan, but have repurposed them for use in chloroplast division instead. Chloroplastida lineages also keep their starch "inside" their chloroplasts. In plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts, as well as those of hornworts, contain a structure called a pyrenoid that concentrates RuBisCO and CO2 in the chloroplast, functionally similar to the glaucophyte carboxysome.
There are some lineages of non-photosynthetic parasitic green algae that have lost their chloroplasts entirely, such as "Prototheca," or have no chloroplast while retaining the separate chloroplast genome, as in "Helicosporidium." Morphological and physiological similarities, as well as phylogenetics, confirm that these are lineages that ancestrally had chloroplasts but have since lost them.
"Paulinella chromatophora".
The photosynthetic amoeboids in the genus "Paulinella"—"P. chromatophora", "P. micropora", and the marine "P. longichromatophora"—have the only known independently evolved chloroplast, often called a chromatophore. While all other chloroplasts originate from a single ancient endosymbiotic event, "Paulinella" independently acquired an endosymbiotic cyanobacterium from the genus "Synechococcus" around 90–140 million years ago. Each "Paulinella" cell contains one or two sausage-shaped chromatophores; they were first described in 1894 by the German biologist Robert Lauterborn.
The chromatophore is highly reduced compared to its free-living cyanobacterial relatives and has limited functions. For example, it has a genome of about 1 million base pairs, one third the size of "Synechococcus" genomes, and only encodes around 850 proteins. However, this is still much larger than other chloroplast genomes, which are typically around 150,000 base pairs. Chromatophores have also transferred much less of their DNA to the nucleus of their hosts. About 0.3–0.8% of the nuclear DNA in "Paulinella" is from the chromatophore, compared with 11–14% from the chloroplast in plants. As in other chloroplasts, proteins are targeted to the chromatophore using a dedicated targeting sequence. Because chromatophores are much younger than canonical chloroplasts, "Paulinella chromatophora" is studied to understand how early chloroplasts evolved.
Secondary and tertiary chloroplast lineages.
Green algal derived chloroplasts.
Green algae have been taken up as endosymbionts in three or four separate events. Secondary chloroplasts derived from green algae are found primarily in the euglenids and chlorarachniophytes. They are also found in one lineage of dinoflagellates, and possibly in the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles, and haptophytes). Many green algal derived chloroplasts contain pyrenoids, but unlike the chloroplasts of their green algal ancestors, the storage product collects in granules outside the chloroplast.
Euglenophytes.
The euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophytes are the only group outside Diaphoretickes that have chloroplasts without performing kleptoplasty. Euglenophyte chloroplasts have three membranes: the engulfed green alga's cell membrane is thought to have been lost, leaving the two cyanobacterial membranes and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. The carbon fixed through photosynthesis is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.
Chlorarachniophytes.
Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their evolutionary history is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a "red" algal derived chloroplast. It is then thought to have lost its first, red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast.
Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm.
Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm.
Prasinophyte-derived chloroplast.
Dinoflagellates in the genus "Lepidodinium" have lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). "Lepidodinium" is the only dinoflagellate that has a chloroplast that's not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).
Tripartite symbiosis.
The ciliate "Pseudoblepharisma tenue" has two bacterial symbionts, one pink and one green. In 2021, both symbionts were confirmed to be photosynthetic: Ca. "Thiodictyon intracellulare" (Chromatiaceae), a purple sulfur bacterium with a genome just half the size of those of its closest known relatives, and "Chlorella" sp. K10, a green alga. There is also a variant of "Pseudoblepharisma tenue" that only contains chloroplasts from green algae and no endosymbiotic purple bacteria.
Red algal derived chloroplasts.
Secondary chloroplasts derived from red algae appear to have been taken up only once, and then diversified into a large group called chromists or chromalveolates. Today they are found in the haptophytes, cryptomonads, heterokonts, dinoflagellates, and apicomplexans (the CASH lineage). Red algal secondary chloroplasts usually contain chlorophyll "c" and are surrounded by four membranes.
However, chromist monophyly has been rejected, and it is considered more likely that some chromists acquired their plastids by incorporating another chromist instead of inheriting them from a common ancestor. Cryptophytes seem to have acquired plastids from red algae, which were then transmitted from them to both the Heterokontophytes and the Haptophytes, and then from these last to the Myzozoa.
Cryptophytes.
Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes. The outermost membrane is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the ancestral red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Cryptophyte chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in the thylakoid space, rather than anchored on the outside of their thylakoid membranes.
Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.
Haptophytes.
Haptophytes are similar and closely related to cryptophytes and heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize the sugar chrysolaminarin, which is stored in granules completely outside of the chloroplast, in the cytoplasm of the haptophyte.
Stramenopiles (heterokontophytes).
The stramenopiles, also known as heterokontophytes, are a very large and diverse group of eukaryotes. It includes the Ochrophyta—comprising diatoms, brown algae (seaweeds), and golden algae (chrysophytes)—and the Xanthophyceae (also called yellow-green algae).
Heterokont chloroplasts are very similar to haptophyte chloroplasts. They have a pyrenoid, triplet thylakoids, and, with some exceptions, a four-membrane plastid envelope with the outermost membrane connected to the endoplasmic reticulum. Like haptophytes, stramenopiles store sugar in chrysolaminarin granules in the cytoplasm. Stramenopile chloroplasts contain chlorophyll "a" and, with a few exceptions, chlorophyll "c". They also have carotenoids, which give them their many colors.
Apicomplexans, chromerids, and dinophytes.
The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. Many members contain a red-algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs still possess a non-photosynthetic plastid.
Apicomplexans.
Apicomplexans are a group of alveolates. Like the helicosporidia, they're parasitic and have a nonphotosynthetic chloroplast. They were once thought to be related to the helicosporidia, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include "Plasmodium", the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. Other apicomplexans, like "Cryptosporidium", have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic.
The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, iron-sulfur clusters, and carry out part of the heme pathway. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dump the organelle.
Chromerids.
The chromerids are a group of algae known from Australian corals which comprise some close photosynthetic relatives of the apicomplexans. The first member, "Chromera velia", was discovered and first isolated in 2001. The discovery of "Chromera velia", which has a structure similar to that of the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll "c", and use the type II form of RuBisCO, obtained from a horizontal gene transfer event.
Dinophytes.
The dinoflagellates are yet another very large and diverse group, around half of which are at least partially photosynthetic (i.e. mixotrophic). Dinoflagellate chloroplasts have a relatively complex history. Most dinoflagellate chloroplasts are secondary red algal derived chloroplasts. Many dinoflagellates have lost the chloroplast (becoming nonphotosynthetic), while some of them have replaced it through "tertiary" endosymbiosis. Others replaced their original chloroplast with a green algal derived chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages.
The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin, along with chlorophyll "a" and chlorophyll "c"2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. Peridinin chloroplasts also have DNA that is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast.
Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll "a", chlorophyll "c"2, β-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.
Haptophyte-derived chloroplasts.
The fucoxanthin dinophyte lineages (including "Karlodinium" and "Karenia") lost their original red algal derived chloroplast, and replaced it with a new chloroplast derived from a haptophyte endosymbiont, making these tertiary plastids. "Karlodinium" and "Karenia" probably took up different endosymbionts. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it.
Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.
Diatom-derived chloroplasts.
Some dinophytes, like "Kryptoperidinium" and "Durinskia", have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to "five" membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont can't store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably can't be called a nucleomorph because it shows no sign of genome reduction, and might even have been "expanded". Diatoms have been engulfed by dinoflagellates at least three times.
The diatom endosymbiont is bounded by a single membrane, inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids.
In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still around, converted to an eyespot.
Kleptoplasty.
In some groups of mixotrophic protists, like some dinoflagellates (e.g. "Dinophysis"), chloroplasts are separated from a captured alga and used temporarily. These kleptoplasts may only have a lifetime of a few days and are then replaced.
Cryptophyte-derived dinophyte chloroplast.
Members of the genus "Dinophysis" have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and "Dinophysis" species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the "Dinophysis" chloroplast is a kleptoplast—if so, "Dinophysis" chloroplasts wear out and "Dinophysis" species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.
Chloroplast DNA.
Chloroplasts, like other endosymbiotic organelles, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA (cpDNA) was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast genomes from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.
Molecular structure.
With few exceptions, chloroplasts have their entire genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long, with a mass of about 80–130 million daltons. While chloroplast genomes can almost always be assembled into a circular map, the physical DNA molecules inside cells take on a variety of linear and branching forms. New chloroplasts may contain up to 100 copies of their genome, though the number of copies decreases to about 15–20 as the chloroplasts age.
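Those two figures are mutually consistent. As a rough check, assuming an average mass of about 650 daltons per base pair of double-stranded DNA (a standard approximation), a mid-range genome works out to
\[ 1.5\times10^{5}\ \text{bp} \times 650\ \tfrac{\text{Da}}{\text{bp}} \approx 1\times10^{8}\ \text{Da} \approx 100\ \text{million daltons}, \]
squarely within the quoted 80–130 million dalton range.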
Chloroplast DNA is usually condensed into nucleoids, which can contain multiple copies of the chloroplast genome. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Chloroplast DNA is not associated with true histones, the proteins used to pack DNA molecules tightly in eukaryote nuclei. In red algae, however, similar proteins tightly pack each chloroplast DNA ring into a nucleoid.
Many chloroplast genomes contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). A given pair of inverted repeats is rarely identical, but the two are always very similar to each other, apparently the result of concerted evolution. The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs each and containing as few as four or as many as over 150 genes. The inverted repeat regions are highly conserved in land plants, and accumulate few mutations.
Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (Glaucophyta and Rhodophyceae), suggesting that they predate the chloroplast. Some chloroplast genomes have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast genomes which have lost some of the inverted repeat segments tend to get rearranged more.
DNA repair and replication.
In chloroplasts of the moss "Physcomitrella patens", the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant "Arabidopsis thaliana" the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage.
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination, the loss of an amino group from a base, is a chemical change that often results in base substitutions. When adenine is deaminated, it becomes hypoxanthine (X). Hypoxanthine can bind to cytosine, and when the XC base pair is replicated, it becomes a GC (thus, an A → G base change).
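Written out as a minimal scheme (with X standing for hypoxanthine, as above), the chain of events over two rounds of replication is:
\[ \mathrm{A{:}T} \xrightarrow{\text{deamination}} \mathrm{X{:}T} \xrightarrow{\text{replication}} \mathrm{X{:}C} \xrightarrow{\text{replication}} \mathrm{G{:}C} \]
so the strand that carried the deaminated adenine ends up reading G where it once read A.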
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
A competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of the failure to explain the deamination gradient, as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.
Gene content and protein synthesis.
The ancestral cyanobacteria that led to chloroplasts probably had a genome that contained over 3000 genes, but only approximately 100 genes remain in contemporary chloroplast genomes. These genes code for a variety of things, mostly to do with the protein pipeline and photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs).
Among land plants, the contents of the chloroplast genome are fairly similar.
Chloroplast genome reduction and gene transfer.
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called "endosymbiotic gene transfer". As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes, whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating chloroplasts can lose their genome entirely during the endosymbiotic gene transfer process.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in "Arabidopsis", corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Of the approximately 3000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called "retrograde signaling". Recent research indicates that parts of the retrograde signaling network once considered characteristic for land plants emerged already in an algal progenitor, integrating into co-expressed cohorts of genes in the closest algal relatives of land plants.
Protein synthesis.
Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA, the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
Protein targeting and import.
Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway. Many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and are therefore topologically outside of the cell: to reach the chloroplast from the cytosol, the cell membrane must be crossed, which amounts to entering the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a "cleavable transit peptide" that's added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.
Transport proteins and membrane translocons.
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates, or adds a phosphate group to many (but not all) of them in their transit sequences.
Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast.
From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or "translocon on the outer chloroplast membrane", and the TIC complex, or "translocon on the inner chloroplast membrane". Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Structure.
In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are ≈20 μm3 in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., "Oedogonium"), a cup (e.g., "Chlamydomonas"), a ribbon-like spiral around the edges of the cell (e.g., "Spirogyra"), or slightly twisted bands at the cell edges (e.g., "Sirogonium"). Some algae have two chloroplasts in each cell; they are star-shaped in "Zygnema", or may follow the shape of half the cell in order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles, for example, some species of "Chlorella" have a cup-shaped chloroplast that occupies much of the cell.
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram negative cell wall, and not the phagosomal membrane from the host, which was probably lost. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.
There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium—which is not true—both chloroplast membranes are homologous to the cyanobacterium's original double membranes.
The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation across it to generate ATP energy. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is in the opposite direction compared to oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.
Outer chloroplast membrane.
The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or "translocon on the outer chloroplast" membrane.
The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule. Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.
Intermembrane space and peptidoglycan wall.
Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes.
Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called "muroplasts" (from Latin "murus", meaning "wall"). Other chloroplasts were long assumed to have lost the cyanobacterial wall, leaving only an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since been found in mosses, lycophytes, and ferns as well.
Inner chloroplast membrane.
The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex "(translocon on the inner chloroplast membrane)" which is located in the inner chloroplast membrane.
In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.
Peripheral reticulum.
Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle stuff between the thylakoids and intermembrane space.
Stroma.
The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.
Chloroplast ribosomes.
Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While similar to bacterial ribosomes, chloroplast translation is more complex than in bacteria, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3' tail of 23S is found in "higher" plants.
Plastoglobuli.
Plastoglobuli (singular "plastoglobulus", sometimes spelled "plastoglobule(s)"), are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts.
Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids including plastoquinone, vitamin E, carotenoids and chlorophylls.
Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid.
Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.
Starch granules.
Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane.
Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate.
Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.
RuBisCO.
The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.
Pyrenoids.
The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants. Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".
Thylakoid system.
Thylakoids (sometimes spelled "thylakoïds"), are small interconnected sacks which contain the membranes that the light reactions of photosynthesis take place on. The word "thylakoid" comes from the Greek word "thylakos" which means "sack".
Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of membranous sacks called thylakoids where chlorophyll is found and the light reactions of photosynthesis happen.
In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.
Thylakoid structure.
Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana.
In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick.
The three-dimensional structure of the thylakoid membrane system has been disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model, known as the 'bifurcation model', which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns which bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. Likely for ease, the thylakoid system is still commonly depicted by older "hub and spoke" models where the grana are connected to each other by tubes of stromal thylakoids.
Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction.
The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch. Approximately 4 left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidate the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose form a single continuous labyrinth.
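As a geometric aside (elementary helix geometry, not a measurement from the structural studies themselves): a helical surface of radius $r$ and tilt angle $\theta$ has pitch
\[ p = 2\pi r \tan\theta, \]
so at a fixed tilt of about 20°, the left-handed surfaces with roughly a quarter of the pitch must also have roughly a quarter of the radius—consistent with the description of a network built from helical surfaces of different radii and pitch.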
Thylakoid composition.
Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine.
There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture.
In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They can't fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids.
The number of thylakoids and the total thylakoid area of a chloroplast is influenced by light exposure. Shaded chloroplasts contain larger and more grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.
Pigments and chloroplast colors.
Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis.
Paper chromatography of a spinach leaf extract separates the pigments present in its chloroplasts: xanthophylls, chlorophyll "a", and chlorophyll "b".
Chlorophylls.
Chlorophyll "a" is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll "a" is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll "b", chlorophyll "c", chlorophyll "d", and chlorophyll "f".
Chlorophyll "b" is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls "a" and "b" together that make most plant and green algal chloroplasts green.
Chlorophyll "c" is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll "c" is also found in some green algae and cyanobacteria.
Chlorophylls "d" and "f" are pigments found only in some cyanobacteria.
Carotenoids.
In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, like during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll "a". Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.
Phycobilins.
Phycobilins are a third group of pigments, found in cyanobacteria and in glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria don't have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.
Specialized chloroplasts in plants.
To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle, which uses RuBisCO.
C4 plants evolved a way to solve this—by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf.
As a result, chloroplasts in mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called "C4 photosynthesis". The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity. Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma where they still carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains.
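As a sketch of one common variant of this shuttle (the NADP-malic enzyme pathway of C4 plants such as maize; other C4 subtypes use different four-carbon acids and decarboxylases), the carbon flow can be written:
\[ \text{mesophyll: } \mathrm{PEP} + \mathrm{CO_2} \xrightarrow{\text{PEP carboxylase}} \text{oxaloacetate} \xrightarrow{+\,\mathrm{NADPH}} \text{malate} \]
\[ \text{bundle sheath: } \text{malate} + \mathrm{NADP^+} \rightarrow \text{pyruvate} + \mathrm{CO_2} + \mathrm{NADPH} \]
The released CO2 enters the Calvin cycle, while pyruvate returns to the mesophyll, where PEP is regenerated at the cost of ATP.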
Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to get more surface area to transport stuff in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts.
Function and chemistry.
Guard cell chloroplasts.
Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what they do is controversial.
Plant innate immunity.
Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of its role in the immune response, the chloroplast is a frequent target of plant pathogens.
Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence.
Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant.
In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection.
Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide and reactive oxygen species which can serve as defense-signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus.
In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—among them linolenic acid, the fatty acid precursor of jasmonate.
Photosynthesis.
One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).
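In summary form—this is the standard net equation, with glucose standing in for the various sugars actually produced—the overall process is:
\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]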
Light reactions.
The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH (the reduced form of NADP+) and ATP to fuel the dark reactions.
Energy carriers.
ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.
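In summary (standard biochemistry rather than anything chloroplast-specific), the two carriers are charged and discharged as:
\[ \mathrm{ADP} + \mathrm{P_i} + \text{energy} \rightleftharpoons \mathrm{ATP} + \mathrm{H_2O} \]
\[ \mathrm{NADP^+} + \mathrm{H^+} + 2\,e^- \rightarrow \mathrm{NADPH} \]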
Photophosphorylation.
Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion, gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.
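Because pH is a base-10 logarithmic scale, the thousandfold concentration ratio mentioned above corresponds to a difference of about three pH units across the thylakoid membrane:
\[ \Delta\mathrm{pH} = \log_{10}\frac{[\mathrm{H^+}]_{\text{thylakoid}}}{[\mathrm{H^+}]_{\text{stroma}}} = \log_{10} 1000 = 3 \]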
NADP+ reduction.
Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions.
Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from their hydrogen atoms.
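The water-splitting (photolysis) reaction at photosystem II is, in standard balanced form:

$$2\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{H^+} + 4\,e^- + \mathrm{O_2}$$

This single reaction supplies the replacement electrons, contributes protons to the thylakoid gradient, and releases the oxygen for which photosynthesis is known.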
Cyclic photophosphorylation.
While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in C4 plants, which need more ATP than NADPH.
Dark reactions.
The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast.
While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions.
Carbon fixation and G3P synthesis.
The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA.
The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions.
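Putting the two steps together, three turns of the cycle export one G3P; the standard overall stoichiometry (a textbook summary consistent with the one-in-six figure above, since three turns make six G3P and recycle five) is:

$$3\,\mathrm{CO_2} + 9\,\mathrm{ATP} + 6\,\mathrm{NADPH} + 6\,\mathrm{H^+} \longrightarrow \mathrm{G3P} + 9\,\mathrm{ADP} + 8\,\mathrm{P_i} + 6\,\mathrm{NADP^+} + 3\,\mathrm{H_2O}$$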
Sugars and starches.
Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm.
Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast.
Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact.
Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis.
While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor.
Photorespiration.
Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. They include Crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism.
pH.
Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8.
The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3.
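Because pH is defined as −log10[H+], the roughly four-unit difference quoted here corresponds to an enormous concentration ratio:

$$\frac{[\mathrm{H^+}]_{\text{lumen}}}{[\mathrm{H^+}]_{\text{stroma}}} = 10^{\,\mathrm{pH}_{\text{stroma}}-\mathrm{pH}_{\text{lumen}}} = 10^{8-4} = 10^{4}$$

The "thousand times as many" figure given in the photophosphorylation section corresponds to a somewhat smaller difference of about three pH units.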
CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much.
Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system.
In the presence of light, the pH of the thylakoid lumen can drop by up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit.
Amino acid synthesis.
Chloroplasts alone make almost all of a plant cell's amino acids in their stroma, except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (and the proplastid), but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine, but it is unclear whether the organelle carries out the last leg of the pathway or whether it happens in the cytosol.
Other nitrogen compounds.
Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3) which supplies the plant with nitrogen to make its amino acids and nucleotides.
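The nitrite-to-ammonia step can be written in balanced form (standard plant biochemistry; in the chloroplast the six electrons are supplied by reduced ferredoxin):

$$\mathrm{NO_2^-} + 6\,e^- + 7\,\mathrm{H^+} \longrightarrow \mathrm{NH_3} + 2\,\mathrm{H_2O}$$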
Other chemical products.
The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form most of the lipid is from acetyl-CoA, the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical lengths of fatty acids produced in the plastid are 16 or 18 carbons, with 0–3 cis double bonds.
The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. Synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration.
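The carbon and cofactor arithmetic implied by this cycle can be sketched in a few lines of code (a minimal illustration, not a model of the enzymology; the function name and accounting are assumptions made for this example):

```python
def elongate(target_carbons: int = 16):
    """Tally the cost of building a fatty acid by repeated 2-carbon additions."""
    carbons = 2          # the acetyl-CoA primer contributes 2 carbons
    malonyl_used = 0
    nadph_used = 0
    while carbons < target_carbons:
        carbons += 2     # condensation with malonyl-ACP adds 2 carbons (CO2 is released)
        malonyl_used += 1
        nadph_used += 2  # two NADPH-dependent reductions per cycle
    return carbons, malonyl_used, nadph_used

print(elongate(16))  # (16, 7, 14): a 16-carbon chain takes 7 cycles and 14 NADPH
print(elongate(18))  # (18, 8, 16)
```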
Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites.
Location.
Distribution in a plant.
Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts, since the green color comes from the chlorophyll. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts.
In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers and the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf.
Cellular location.
Chloroplast movement.
The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones.
Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move.
In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption.
Studies of "Vallisneria gigantea", an aquatic flowering plant, have shown that chloroplasts can begin moving within five minutes of light exposure, though they do not initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place.
Differentiation, replication, and inheritance.
Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common.
In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana.
If angiosperm shoots are not exposed to the light required for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll and has inner membrane invaginations that form a lattice of tubes in its stroma, called a prolamellar body. While etioplasts lack chlorophyll, they keep a stockpile of a yellow chlorophyll precursor. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, in which the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts.
Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in.
Plastid interconversion.
Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. Chloroplast, amyloplast, chromoplast, and proplastid are not absolute states; intermediate forms are common.
Division.
Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells.
In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast.
Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells.
Much of what we know about chloroplast division comes from studying organisms like "Arabidopsis" and the red alga "Cyanidioschyzon merolae".
The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments and, with the help of the protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein, ARC3, may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form.
Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like "Cyanidioschyzon merolae", chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space.
Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts.
Later, the dynamins migrate under the outer plastid-dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast into two daughter chloroplasts.
A remnant of the outer plastid-dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts.
Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms.
Regulation.
In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown.
Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor-quality green light, but are slow to complete division—they require exposure to bright white light to do so. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts.
Chloroplast inheritance.
Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs in very low levels in some flowering plants.
Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring.
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally.
Angiosperms, which pass on chloroplasts maternally, have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo.
Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance.
Transplastomic plants.
Recently, chloroplasts have caught the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a failed containment rate of transplastomic plants of 3 in 1,000,000.
6357 | 48930484 | https://en.wikipedia.org/wiki?curid=6357 | Camp David
Camp David is a country retreat for the president of the United States. It lies in the wooded hills of Catoctin Mountain Park, in Frederick County, Maryland, near the towns of Thurmont and Emmitsburg, north-northwest of the national capital, Washington, D.C. It is code-named Naval Support Facility Thurmont. Technically a military installation, it is staffed primarily by the Seabees, the Civil Engineer Corps (CEC), the United States Navy, and the United States Marine Corps. Naval construction battalions are tasked with Camp David construction and send detachments as needed.
Originally known as Hi-Catoctin, Camp David was built as a retreat for federal government agents and their families by the Works Progress Administration. Construction started in 1935 and was completed in 1938. In 1942, President Franklin D. Roosevelt converted it to a presidential retreat and renamed it "Shangri-La", after the fictional Himalayan paradise. Camp David received its present name in 1953 from President Dwight D. Eisenhower, in honor of his father and his grandson, both named David.
Catoctin Mountain Park does not indicate the location of Camp David on park maps due to privacy and security concerns, although it can be seen in publicly accessible satellite images and on certain public web mapping services such as Google Maps.
Presidential use.
Camp David has been used to host private diplomatic meetings with foreign leaders and heads of state since at least World War II. Franklin D. Roosevelt hosted Winston Churchill at Shangri-La in May 1943, during World War II. Dwight Eisenhower held his first cabinet meeting there on November 22, 1955, following hospitalization and convalescence he required after a heart attack suffered in Denver, Colorado, on September 24. Eisenhower met Nikita Khrushchev there for two days of discussions in September 1959.
John F. Kennedy and his family often enjoyed riding and other recreational activities there, and Kennedy often allowed White House staff and Cabinet members to use the retreat when he or his family were not there. Lyndon B. Johnson met with advisors in this setting and hosted both Australian prime minister Harold Holt and Canadian prime minister Lester B. Pearson there. Richard Nixon was a frequent visitor. He personally directed the construction of a swimming pool and other improvements to Aspen Lodge. Gerald Ford hosted Indonesian president Suharto at Camp David.
Jimmy Carter initially favored closing Camp David in order to save money, but once he visited the retreat, he decided to keep it. Carter brokered the Camp David Accords there in September 1978 between Egyptian president Anwar al-Sadat and Israeli prime minister Menachem Begin. Ronald Reagan visited the retreat more than any other president. In 1984, Reagan hosted British prime minister Margaret Thatcher. Reagan restored the nature trails that Nixon paved over so he could horseback ride at Camp David. George H. W. Bush's daughter, Dorothy Bush Koch, was married there in 1992, in the first wedding held at Camp David. During his tenure as president, Bill Clinton spent every Thanksgiving at Camp David with his family. In July 2000, he hosted the 2000 Camp David Summit negotiations between Israeli prime minister Ehud Barak and Palestinian Authority chairman Yasser Arafat there.
In February 2001, George W. Bush held his first meeting with a European leader, UK prime minister Tony Blair, at Camp David, to discuss missile defense, Iraq, and NATO. After the September 11 attacks, Bush held a Cabinet meeting at Camp David to prepare the United States invasion of Afghanistan. During his two terms in office, Bush visited Camp David 149 times, for a total of 487 days, both to host foreign visitors and as a personal retreat. He met Blair there four times. Among the numerous other foreign leaders he hosted at Camp David were Russian president Vladimir Putin and President Musharraf of Pakistan in 2003, Danish prime minister Anders Fogh Rasmussen in June 2006, and British prime minister Gordon Brown in 2007.
Barack Obama chose Camp David to host the 38th G8 summit in 2012. President Obama also hosted Russian prime minister Dmitry Medvedev at Camp David, as well as the GCC Summit there in 2015.
Donald Trump hosted Senate majority leader Mitch McConnell and Speaker of the House Paul Ryan at Camp David while the Republican Party prepared to defend both houses of Congress in the 2018 midterm elections. Trump also planned to meet with the Taliban at Camp David to negotiate a peace agreement in 2019, but refrained after a suicide bombing in Kabul killed a U.S. soldier. The 46th G7 summit was to be held at Camp David on June 10–12, 2020, but was cancelled due to health concerns during what was then considered the height of the COVID-19 pandemic.
Joe Biden hosted the U.S.–Japan–Korea Summit with Japanese prime minister Fumio Kishida and South Korean president Yoon Suk Yeol at Camp David in August 2023, resulting in the declaration of the Camp David Principles on trilateral relations between the U.S., Japan, and South Korea.
Practice golf facility.
To be able to play his favorite sport, President Eisenhower had golf course architect Robert Trent Jones design a practice golf facility at Camp David. Around 1954, Jones built one golf hole—a par 3—with four different tees; Eisenhower added a driving range near the helicopter landing zone.
Security incidents.
On July 2, 2011, an F-15 intercepted a civilian aircraft near Camp David while President Obama was in residence. The two-seater, which was out of radio communication, was escorted to nearby Hagerstown, Maryland, without incident.
On July 10, 2011, an F-15 intercepted another small plane near Camp David while Obama was again in residence; a total of three aircraft were intercepted that weekend.
6359 | 42021989 | https://en.wikipedia.org/wiki?curid=6359 | Crux
Crux is a constellation of the southern sky that is centred on four bright stars in a cross-shaped asterism commonly known as the Southern Cross. It lies on the southern end of the Milky Way's visible band. The name "Crux" is Latin for cross. Though it is the smallest of all 88 modern constellations, Crux is among the most easily distinguished, as each of its four main stars has an apparent visual magnitude brighter than +2.8. It has attained a high level of cultural significance in many Southern Hemisphere states and nations.
Blue-white α Crucis (Acrux) is the most southerly member of the constellation and, at magnitude 0.8, the brightest. The three other stars of the cross appear clockwise, in order of decreasing brightness: β Crucis (Mimosa), γ Crucis (Gacrux), and δ Crucis (Imai). ε Crucis (Ginan) also lies within the cross asterism. Many of these brighter stars are members of the Scorpius–Centaurus association, a large but loose group of hot, blue-white stars that appear to share common origins and motion across the southern Milky Way.
Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its eastern border. Nearby to the southeast is a large dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca.
History.
The bright stars in Crux were known to the ancient Greeks; Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 AD, the stars in the constellation now called Crux never rose above the horizon throughout most of Europe. Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his "Divine Comedy". His description, however, may be allegorical, and the similarity to the constellation a coincidence.
Venetian navigator Alvise Cadamosto in the 15th century made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the "carro dell'ostro" ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras for being the first European to depict it correctly. Faras sketched and described the constellation (calling it "las guardas") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch.
Explorer Amerigo Vespucci seems to have observed not only the Southern Cross, but also the neighboring Coalsack Nebula on his second voyage in 1501–1502.
Another early modern description clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515 to 1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole, including the Southern Cross and the two Magellanic Clouds seen in an external orientation, as on a globe.
Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux.
Characteristics.
Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north, and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −55.68° and −64.70°. The whole constellation is visible for at least part of the year to observers south of the 25th parallel north.
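The quoted sky fraction follows directly from the area of the whole celestial sphere, about 41,253 square degrees:

$$\frac{68}{41{,}253} \approx 0.165\%$$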
In tropical regions, Crux can be seen in the sky from April to June. Crux is exactly opposite Cassiopeia on the celestial sphere, so the two can never appear in the sky at the same time. In the current era, south of Cape Town, Adelaide, and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky.
Crux is sometimes confused with the nearby False Cross asterism by stargazers. The False Cross consists of stars in Carina and Vela, is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross.
Visibility.
Crux is easily visible from the Southern Hemisphere and, being circumpolar south of the 35th parallel south, can be seen there at practically any time of year. It is also visible near the horizon from tropical latitudes of the Northern Hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancun or any other place at latitude 25° N or less at around 10 pm at the end of April. The constellation has five main stars.
Due to precession, Crux will move closer to the South Pole in the next few millennia, up to 67° south declination for the middle of the constellation. However, by the year 14,000, Crux will be visible for most parts of Europe and the continental United States. Its visibility will extend to Northern Europe by 18,000, when it will be less than 30° south declination.
Use in navigation.
In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ Crucis to α Crucis (the foot of the crucifix) and extending it about 4.5 times farther gives a point close to the southern celestial pole. Coincidentally, this is also near where that line intersects a perpendicular drawn southwards from the midpoint of the east–west axis joining Alpha Centauri and Beta Centauri, stars at a similar declination to Crux and separated by a similar width as the cross, but brighter. Argentine "gauchos" are documented as using Crux for night orientation in the Pampas and Patagonia.
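The pole-finding rule lends itself to a short numerical check (an illustrative sketch; the coordinates are approximate J2000 values, and the great-circle extrapolation is one assumed way to formalize "projecting the line"):

```python
import numpy as np

def radec_to_vec(ra_deg, dec_deg):
    """Convert equatorial coordinates to a 3D unit vector."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

gacrux = radec_to_vec(187.79, -57.11)  # gamma Crucis (approx. J2000)
acrux  = radec_to_vec(186.65, -63.10)  # alpha Crucis (approx. J2000)

# Follow the great circle from gamma through alpha, then continue
# about 4.5 gamma-alpha separations beyond alpha (slerp with t = 5.5).
theta = np.arccos(np.clip(np.dot(gacrux, acrux), -1.0, 1.0))
t = 1.0 + 4.5
point = (np.sin((1 - t) * theta) * gacrux
         + np.sin(t * theta) * acrux) / np.sin(theta)

dec = np.degrees(np.arcsin(point[2] / np.linalg.norm(point)))
print(f"Declination of projected point: {dec:.1f} deg")  # lands near -90
```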
Alpha and Beta Centauri are at similar declinations (and thus similar distances from the pole) and are often referred to as the "Southern Pointers" or just "the Pointers", allowing people to easily identify the Southern Cross. Very few bright stars lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately south of Crux.
Bright stars.
Ninety-two stars brighter than apparent magnitude +2.5 can be seen from Earth, and three of them lie in Crux. With 3.26% of these bright stars in only 0.17% of the sky, Crux is the constellation most densely packed with bright stars, holding about 19.2 times the number expected from a homogeneous distribution of all bright stars across the whole sky, given its area.
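The over-density is simple arithmetic on the quoted figures:

$$\frac{3}{92} \approx 3.26\%, \qquad \frac{3.26\%}{0.17\%} \approx 19.2$$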
Features.
Stars.
Within the constellation's borders, 49 stars are brighter than or equal to apparent magnitude 6.5. The four main stars that form the asterism are Alpha, Beta, Gamma, and Delta Crucis.
Also, a fifth star is often included with the Southern Cross.
Several other naked-eye stars are within the borders of Crux, especially:
Scorpius–Centaurus association.
Unusually, 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda, and both the components of the visual double star, Mu.
Variable stars.
Crux contains many variable stars. It boasts four Cepheid variables that may all reach naked eye visibility.
Other well-studied variable stars include:
Exoplanet host stars.
The star HD 106906 has been found to have a planet—HD 106906 b—that has one of the widest orbits of any currently known planetary-mass companions.
Objects beyond the Local Arm.
Crux is backlit by the multitude of stars of the Scutum-Crux Arm (more commonly called the Scutum-Centaurus Arm) of the Milky Way. This is the main inner arm in the local radial quarter of the galaxy. Partly obscuring this is:
A key feature of the Scutum-Crux Arm is:
Cultural significance.
The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the Southern Hemisphere, particularly of Australia, Brazil, Chile, and New Zealand.
Flags and symbols.
Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the flags of Australia, Brazil, New Zealand, Papua New Guinea, and Samoa. They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory, and the Northern Territory, as well as the flag of Magallanes Region of Chile, the flag of Londrina (Brazil), and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz). The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and on the cover of Brazilian passports.
Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the insignia of the Order of the Southern Cross, and the cross has featured as the name of the Brazilian currency (the "cruzeiro" from 1942 to 1986 and again from 1990 to 1994). All coins of the 1998 series of the Brazilian real display the constellation.
Songs and literature reference the Southern Cross, including the Argentine epic poem "Martín Fierro". The Argentinian singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren".
The cross gets a mention in the lyrics of the Brazilian National Anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines").
The Southern Cross is mentioned in the Australian National Anthem: "Beneath our radiant Southern Cross we'll toil with hearts and hands".
The Southern Cross features in the coat of arms of William Birdwood, 1st Baron Birdwood, the British officer who commanded the Australian and New Zealand Army Corps during the Gallipoli Campaign of the First World War.
The Southern Cross is also mentioned in the Samoan National Anthem: "Vaai 'i na fetu o lo'u a agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa." ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa.")
The 1952–53 NBC television series "Victory at Sea" contained a musical number entitled "Beneath the Southern Cross".
"Southern Cross" is a single released by Crosby, Stills and Nash in 1981. It reached number 18 on "Billboard "Hot 100 in late 1982.
"The Sign of the Southern Cross" is a song released by Black Sabbath in 1981. The song was released on the album "Mob Rules".
The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation".
In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: "Thy Southern Cross the night".
A stylized version of Crux appears on the Australian Eureka Flag. The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organized in the Southern Hemisphere, on the island of New Caledonia, and also on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain.
The "Petersflagge" flag of the German East Africa Company of 1885–1920, which included a constellation of five white, five-pointed Crux "stars" on a red ground, later served as the model for symbolism associated with generic German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and the (1956/1983 to the present).
Southern Cross station is a major rail terminal in Melbourne, Australia.
The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church primarily within the territory of the Australian Catholic Bishops Conference for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia.
The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia.
Various cultures.
In India, a story relates the creation of Trishanku Swarga (त्रिशंकु), identified with the Cross (Crux), by the sage Vishwamitra.
In Chinese, the name meaning "Cross" refers to an asterism consisting of γ Crucis, α Crucis, β Crucis, and δ Crucis.
In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the Emu in the Sky (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the left hand of Tagai, and the stars of Musca as the trident of the fishing spear he is holding. In Aranda traditions of central Australia, the four Cross stars are the talon of an eagle and Gamma Centauri as its leg.
Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In both Indonesia and Malaysia, it is known as "Bintang Pari" and "Buruj Pari", respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as "sao Cá Liệt" (the ponyfish star).
Among Filipino people, the Southern Cross has various names pertaining to tops, including "kasing" (Visayan languages), "paglong" (Bikol), and "pasil" (Tagalog). It is also called "butiti" (puffer fish) in Waray.
The Javanese people of Indonesia called this constellation "Gubug pèncèng" ("raking hut") or "lumbung" ("the granary"), because the shape of the constellation was like that of a raking hut.
The Southern Cross (α, β, γ, and δ Crucis), together with μ Crucis, is one of the asterisms used by Bugis sailors for navigation, called "bintoéng bola képpang", meaning "incomplete house star".
The Māori name for the Southern Cross is "Māhutonga" and it is thought of as the anchor ("Te Punga") of Tama-rereti's "waka" (the Milky Way), while the Pointers are its rope. In Tonga it is known as "Toloa" ("duck"); it is depicted as a duck flying south, with one of its wings (δ Crucis) wounded because "Ongo tangata" ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as "Humu" (the "triggerfish"), because of its shape. In Samoa the constellation is called "Sumu" ("triggerfish") because of its rhomboid shape, while α and β Centauri are called "Luatagata" (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross. These included a knee protector and a net used to catch Palolo worms. Neighboring peoples in the Marshall Islands saw these stars as a fish. Peninsular Malays also see the likeness of a fish in Crux, particularly the Scomberomorus, known locally as "Tohok".
In Mapudungun, the language of Patagonian Mapuches, the name of the Southern Cross is "Melipal", which means "four stars". In Quechua, the language of the Inca civilization, Crux is known as "Chakana", which means literally "stair" ("chaka", bridge, link; "hanan", high, above), but carries a deep symbolism within Quechua mysticism. Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation encompassing Centaurus and Circinus along with the two bright stars. The Great Rhea was a constellation of the Bororo of Brazil. The Mocoví people of Argentina also saw a rhea including the stars of Crux. Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta and the optical double star Delta1,2 Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as "Aganagi", angry bees that had emerged from the Coalsack, which they saw as the beehive.
Among Tuaregs, the four most visible stars of Crux are considered "iggaren", i.e. four "Maerua crassifolia" trees. The Tswana people of Botswana saw the constellation as "Dithutlwa", two giraffes – Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female.
6362 | 42021989 | https://en.wikipedia.org/wiki?curid=6362 | Cetus
Cetus is a constellation, sometimes called 'the whale' in English. In Greek mythology, Cetus was a sea monster that both Perseus and Heracles needed to slay. Cetus is in the region of the sky that contains other water-related constellations: Aquarius, Pisces, and Eridanus.
Features.
Ecliptic.
Cetus is not among the 12 true zodiac constellations in the J2000 epoch, nor the classical 12-part zodiac. The ecliptic passes less than 0.25° from one of its corners, so the Moon and planets briefly enter Cetus (occulting any of its stars as foreground objects) in about half of their successive orbits, and the southern part of the Sun appears in Cetus for about 14 hours each year on March 27 to 28. Many belt asteroids, those with a slightly greater inclination to the ecliptic than the Moon and planets, spend longer phases in the north-western part of Cetus and occult its stars.
As seen from Mars, the ecliptic (the apparent plane of the Sun, which is almost the same as the average plane of the planets) passes through Cetus.
Stars.
Mira ("wonderful", named by Bayer: Omicron Ceti, a star in the neck of the asterism) was the first variable star to be discovered and is the prototype of its class, the Mira variables. Over a period of 332 days, it reaches a maximum apparent magnitude of 3, visible to the naked eye, and dips to a minimum magnitude of 10, invisible to the unaided eye. Its seeming appearance and disappearance gave it its name. Mira pulsates between a minimum size of 400 solar diameters and a maximum of 500 solar diameters. Located 420 light-years from Earth, it was discovered by David Fabricius in 1596.
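The size of this variation is easiest to appreciate through the standard magnitude–flux relation: a swing of 7 magnitudes corresponds to a brightness ratio of

$$\frac{F_{\max}}{F_{\min}} = 10^{\,0.4\,(10-3)} = 10^{2.8} \approx 630,$$

so Mira at maximum is several hundred times brighter than at minimum.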
α Ceti, traditionally called Menkar ("the nose"), is a red-hued giant star of magnitude 2.5, 220 light-years from Earth. It is a wide double star; the secondary is 93 Ceti, a blue-white hued star of magnitude 5.6, 440 light-years away. β Ceti, also called Deneb Kaitos and Diphda is the brightest star in Cetus. It is an orange-hued giant star of magnitude 2.0, 96 light-years from Earth. The traditional name "Deneb Kaitos" means "the whale's tail". γ Ceti, Kaffaljidhma ("head of the whale") is a very close double star. The primary is a blue-hued star of magnitude 3.5, 82 light-years from Earth, and the secondary is an F-type star of magnitude 6.6. Tau Ceti is noted for being a near Sun-like star at a distance of 11.9 light-years. It is a yellow-hued main-sequence star of magnitude 3.5.
AA Ceti is a triple star system; the brightest member has a magnitude of 6.2. The primary and secondary are separated by 8.4 arcseconds at an angle of 304 degrees. The tertiary is not visible in telescopes. AA Ceti is an eclipsing variable star; the tertiary star passes in front of the primary and causes the system's apparent magnitude to decrease by 0.5 magnitudes. UV Ceti is an unusual binary variable star. At 8.7 light-years from Earth, the system consists of two red dwarfs, both of magnitude 13. One of the stars is a flare star, a type prone to sudden, random outbursts that last several minutes; these increase the pair's apparent brightness significantly, to as high as magnitude 7.
Deep-sky objects.
Cetus lies far from the galactic plane, so many distant galaxies are visible, unobscured by dust from the Milky Way. Of these, the brightest is Messier 77 (NGC 1068), a 9th-magnitude spiral galaxy near Delta Ceti. It appears face-on and has a clearly visible nucleus of magnitude 10. About 50 million light-years from Earth, M77 is also a Seyfert galaxy and thus a bright object in the radio spectrum. The galaxy cluster JKCS 041, found in Cetus, was confirmed to be the most distant cluster of galaxies known at the time of its discovery. The Pisces–Cetus Supercluster Complex is a galaxy filament, one of the largest known structures in the observable universe; it contains the Virgo Supercluster, which in turn contains the Local Group that includes the Milky Way.
The massive cD galaxy Holmberg 15A is also found in Cetus; as are the spiral galaxy NGC 1042, the elliptical galaxy NGC 1052 and the ultra-diffuse galaxy NGC 1052-DF2.
IC 1613 (Caldwell 51) is an irregular dwarf galaxy near the star 26 Ceti and is a member of the Local Group.
NGC 246 (Caldwell 56), also called the "Cetus Ring", is a planetary nebula with a magnitude of 8.0 at 1600 light-years from Earth. Among some amateur astronomers, NGC 246 has garnered the nickname "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field.
The Wolf–Lundmark–Melotte (WLM) galaxy is a barred irregular galaxy discovered in 1909 by Max Wolf, located on the outer edges of the Local Group. The discovery of the nature of the galaxy was credited to Knut Lundmark and Philibert Jacques Melotte in 1926.
The spiral galaxy UGC 1646 also lies within the borders of the constellation, about 150 million light-years away. It can be seen near the star TYC 43-234-1.
History and mythology.
Cetus may have originally been associated with a whale, which would have had mythic status amongst Mesopotamian cultures. It is often now called the Whale, though it is most strongly associated with Cetus the sea-monster, who was slain by Perseus as he saved the princess Andromeda from Poseidon's wrath. It lies in the middle of "The Sea", a set of water-associated constellations recognised by mythologists, its other members being Eridanus, Pisces, Piscis Austrinus, and Aquarius.
Cetus has been depicted in many ways throughout its history. In the 17th century, Cetus was depicted as a "dragon fish" by Johann Bayer, while both Willem Blaeu and Andreas Cellarius depicted Cetus as a whale-like creature in the same century. However, Cetus has also been variously depicted with animal heads attached to a piscine body.
In global astronomy.
In Chinese astronomy, the stars of Cetus are found among two areas: the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ") and the White Tiger of the West (西方白虎, "Xī Fāng Bái Hǔ").
The Tukano and Kobeua people of the Amazon used the stars of Cetus to create a jaguar, representing the god of hurricanes and other violent storms. Lambda, Mu, Xi, Nu, Gamma, and Alpha Ceti represented its head; Omicron, Zeta, and Chi Ceti represented its body; Eta Eridani, Tau Ceti, and Upsilon Ceti marked its legs and feet; and Theta, Eta, and Beta Ceti delineated its tail.
In Hawaii, the constellation was called "Na Kuhi", and Mira (Omicron Ceti) may have been called "Kane".
Namesakes.
USS "Cetus" (AK-77) was a United States Navy Crater class cargo ship named after the constellation.
6363 | 42021989 | https://en.wikipedia.org/wiki?curid=6363 | Carina (constellation)
Carina is a constellation in the southern sky. Its name is Latin for the keel of a ship, and it was the southern foundation of the larger constellation of Argo Navis (the ship "Argo") until it was divided into three pieces, the other two being Puppis (the poop deck) and Vela (the sails of the ship).
History and mythology.
Carina was once a part of Argo Navis, the great ship of the mythical Jason and the Argonauts who searched for the Golden Fleece. The constellation of Argo was introduced in ancient Greece. However, due to the massive size of Argo Navis and the sheer number of stars that required separate designation, Nicolas-Louis de Lacaille divided Argo into three sections in 1763, including Carina (the hull or keel). In the 19th century, these three became established as separate constellations, and were formally included in the list of 88 modern IAU constellations in 1930. Lacaille kept a single set of Greek letters for the whole of Argo, and separate sets of Latin letter designations for each of the three sections. Therefore, Carina has the α, β and ε, Vela has γ and δ, Puppis has ζ, and so on.
Notable features.
Stars.
Carina contains Canopus, a white-hued supergiant that is the second-brightest star in the night sky at magnitude −0.72. Alpha Carinae, as Canopus is formally designated, is 313 light-years from Earth. Its traditional name comes from the mythological Canopus, who was a navigator for Menelaus, king of Sparta.
There are several other stars above magnitude 3 in Carina. Beta Carinae, traditionally called Miaplacidus, is a blue-white-hued star of magnitude 1.7, 111 light-years from Earth. Epsilon Carinae is an orange-hued giant star similarly bright to Miaplacidus at magnitude 1.9; it is 630 light-years from Earth. Another fairly bright star is the blue-white-hued Theta Carinae; it is a magnitude 2.7 star 440 light-years from Earth. Theta Carinae is also the most prominent member of the cluster IC 2602. Iota Carinae is a white-hued supergiant star of magnitude 2.2, 690 light-years from Earth.
Eta Carinae is the most prominent variable star in Carina, with a mass of approximately 100 solar masses and a luminosity around 4 million times that of the Sun. It was first discovered to be unusual in 1677, when its magnitude suddenly rose to 4, attracting the attention of Edmond Halley. Eta Carinae is inside NGC 3372, commonly called the Carina Nebula. It had a long outburst in 1827, when it brightened to magnitude 1, only fading to magnitude 1.5 in 1828. Its most prominent outburst made Eta Carinae the equal of Sirius; it brightened to magnitude −1.5 in 1843. In the decades following 1843 it appeared relatively placid, with a magnitude between 6.5 and 7.9. However, in 1998, it brightened again, though only to magnitude 5.0, a far less drastic outburst. Eta Carinae is a binary star, with a companion that has a period of 5.5 years; the two stars are surrounded by the Homunculus Nebula, which is composed of gas that was ejected in 1843.
There are several less prominent variable stars in Carina. l Carinae is a Cepheid variable noted for its brightness; it is the brightest Cepheid that is variable to the unaided eye. It is a yellow-hued supergiant star with a minimum magnitude of 4.2 and a maximum magnitude of 3.3; it has a period of 35.5 days.
V382 Carinae is a yellow hypergiant, one of the rarest types of stars. It is a slow irregular variable, with a minimum magnitude of 4.05 and a maximum magnitude of 3.77. As a hypergiant, V382 Carinae is a luminous star, with 212,000 times the luminosity of the Sun and over 480 times the Sun's size.
Two bright Mira variable stars are in Carina: R Carinae and S Carinae; both stars are red giants. R Carinae has a minimum magnitude of 10.0 and a maximum magnitude of 4.0. Its period is 309 days and it is 416 light-years from Earth. S Carinae is similar, with a minimum magnitude of 10.0 and a maximum magnitude of 5.0. However, S Carinae has a shorter period—150 days, though it is much more distant at 1,300 light-years from Earth.
Carina is home to several double stars and binary stars. Upsilon Carinae is a binary star with two blue-white-hued giant components, 1,600 light-years from Earth. The primary is of magnitude 3.0 and the secondary is of magnitude 6.0; the two components are distinguishable in a small amateur telescope.
Two asterisms are prominent in Carina. The 'Diamond Cross' is composed of the stars Beta, Theta, Upsilon, and Omega Carinae. The Diamond Cross is visible south of 20°N latitude, and is larger but fainter than the Southern Cross in Crux. Flanking the Diamond Cross is the False Cross, composed of four stars, with two stars in Carina, Iota Carinae and Epsilon Carinae, and two stars in Vela, Kappa Velorum and Delta Velorum. It is often mistaken for the Southern Cross, causing errors in astronavigation.
Deep-sky objects.
Carina is known for its namesake nebula, NGC 3372, discovered by French astronomer Nicolas-Louis de Lacaille in 1751, which contains several smaller nebulae within it. The Carina Nebula overall is an extended emission nebula approximately 8,000 light-years away and 300 light-years wide that includes vast star-forming regions. It has an overall magnitude of 8.0 and an apparent diameter of over 2 degrees. Its central region is called the Keyhole, or the Keyhole Nebula. This was described in 1847 by John Herschel, and likened to a keyhole by Emma Converse in 1873. The Keyhole is about seven light-years wide and is composed mostly of ionized hydrogen, with two major star-forming regions. The Homunculus Nebula is a nebula, visible to the naked eye, that is being ejected by the erratic luminous blue variable star Eta Carinae, one of the most massive stars visible. Eta Carinae is so massive that it has reached the theoretical upper limit for the mass of a star and is therefore unstable. It is known for its outbursts; in 1843 it briefly became one of the brightest stars in the sky due to a particularly massive outburst, which largely created the Homunculus Nebula. Because of this instability and history of outbursts, Eta Carinae is considered a prime supernova candidate for the next several hundred thousand years, as it has reached the end of its estimated million-year life span.
NGC 2516 is an open cluster that is both quite large (approximately half a degree square) and bright, visible to the unaided eye. It is located 1,100 light-years from Earth and has approximately 80 stars, the brightest of which is a red giant star of magnitude 5.2. NGC 3114 is another open cluster of approximately the same size, though it is more distant at 3,000 light-years from Earth. It is looser and dimmer than NGC 2516, as its brightest stars are only 6th magnitude. The most prominent open cluster in Carina is IC 2602, also called the "Southern Pleiades". It contains Theta Carinae, along with several other stars visible to the unaided eye. In total, the cluster possesses approximately 60 stars. The Southern Pleiades is particularly large for an open cluster, with a diameter of approximately one degree. Like IC 2602, NGC 3532 is visible to the unaided eye and is of comparable size. It possesses approximately 150 stars arranged in an unusual shape, approximating an ellipse with a dark central area. Several prominent orange giants of the 7th magnitude are among the cluster's bright stars. Superimposed on the cluster is Chi Carinae, a yellow-white-hued star of magnitude 3.9, far more distant than NGC 3532.
Carina also contains the naked-eye globular cluster NGC 2808. Epsilon Carinae and Upsilon Carinae are double stars visible in small telescopes.
One noted galaxy cluster is 1E 0657-56, the Bullet Cluster. At a distance of 4 billion light-years (redshift 0.296), this galaxy cluster is named for the shock wave seen in the intracluster medium, which resembles the shock wave of a supersonic bullet. The visible bow shock is thought to be produced by the smaller galaxy cluster moving through the intracluster medium of the larger cluster at a relative speed of 3,000–4,000 kilometers per second. Because this gravitational interaction has been ongoing for hundreds of millions of years, the smaller cluster is being destroyed and will eventually merge with the larger cluster.
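The quoted distance and redshift are broadly consistent with each other. At redshift 0.296 an accurate figure requires a full cosmological model, but the naive low-redshift Hubble law already lands in the right range; a rough Python sketch, assuming a Hubble constant of 70 km/s/Mpc:

    # Order-of-magnitude distance from redshift via d = c*z/H0.
    # Only indicative at z ~ 0.3; H0 = 70 km/s/Mpc is an assumed value.
    C_KM_S = 299_792.458    # speed of light in km/s
    H0 = 70.0               # Hubble constant in km/s/Mpc (assumed)
    LY_PER_MPC = 3.2616e6   # light-years per megaparsec

    z = 0.296
    d_mpc = C_KM_S * z / H0
    print(round(d_mpc * LY_PER_MPC / 1e9, 1), "billion light-years")  # ~4.1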
Meteors.
Carina contains the radiant of the Eta Carinids meteor shower, which peaks around January 21 each year.
Equivalents.
From China (especially northern China), the stars of Carina can barely be seen. The star Canopus (the south polar star in Chinese astronomy) was placed by Chinese astronomers in the Vermilion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"). The rest of the stars were first classified by Xu Guangqi during the Ming dynasty, based on knowledge acquired from western star charts, and placed among The Southern Asterisms (近南極星區, "Jìnnánjíxīngqū").
Polynesian peoples had no name for the constellation in particular, though they had many names for Canopus.
The Māori name "Ariki" ("High-born") and the Hawaiian "Ke Alii-o-kona-i-ka-lewa" ("The Chief of the southern expanse") both attest to the star's prominence in the southern sky, while the Māori "Atutahi" ("First-light" or "Single-light") and the Tuamotu "Te Tau-rari" and "Marere-te-tavahi" ("He who stands alone") refer to the star's solitary nature.
It was also called "Kapae-poto" ("Short horizon"), because it rarely sets from the vantage point of New Zealand, and "Kauanga" ("Solitary"), when it was the last star visible before sunrise.
Future.
Carina lies in the southern sky near the south celestial pole, making it circumpolar (it never sets) for most of the Southern Hemisphere. Due to the precession of Earth's axis, by the year 4700 the south celestial pole will lie in Carina. Three bright stars in Carina will come within 1 degree of the southern celestial pole and take turns as the southern pole star: Omega Carinae (magnitude 3.29) in 5600, Upsilon Carinae (magnitude 2.97) in 6700, and Iota Carinae (magnitude 2.21) in 7900. Around 13,860 CE, the bright star Canopus (magnitude −0.7) will have a greater declination than −82°.
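The mechanism can be sketched with standard spherical astronomy: precession carries a star's ecliptic longitude through a full circle roughly every 25,800 years while its ecliptic latitude stays nearly fixed, so its declination oscillates between two extremes. The Python sketch below is illustrative only; it holds the obliquity fixed at 23.44° (in reality the obliquity itself drifts slowly) and assumes an approximate ecliptic latitude of −75.8° for Canopus:

    import math

    EPS = math.radians(23.44)  # obliquity of the ecliptic, held fixed (an approximation)

    # Declination from ecliptic coordinates:
    # sin(dec) = sin(beta) * cos(eps) + cos(beta) * sin(eps) * sin(lam)
    def declination_deg(beta_deg, lam_deg):
        b, l = math.radians(beta_deg), math.radians(lam_deg)
        return math.degrees(math.asin(
            math.sin(b) * math.cos(EPS) + math.cos(b) * math.sin(EPS) * math.sin(l)))

    # Sweep one precession cycle for Canopus (ecliptic latitude ~ -75.8 deg, assumed):
    decs = [declination_deg(-75.8, lam) for lam in range(360)]
    print(round(min(decs), 1), "to", round(max(decs), 1))  # roughly -81 to -52 degrees

Under these simplified assumptions, Canopus's declination swings between roughly −81° and −52° over the cycle; its closest approaches to the pole also depend on the slow drift of the obliquity, which this sketch ignores.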
Namesakes.
USS Carina was a United States Navy "Crater"-class cargo ship named after the constellation.
The Toyota Carina automobile was also named after the constellation.
Camelopardalis
https://en.wikipedia.org/wiki?curid=6364
Camelopardalis is a large but faint constellation of the northern sky representing a giraffe. The constellation was introduced in 1612 or 1613 by Petrus Plancius. Some older astronomy books give Camelopardalus or Camelopardus as alternative forms of the name, but the version recognized by the International Astronomical Union matches the genitive form, seen suffixed to most of its brighter stars.
Etymology.
First attested in English in 1785, the word "camelopardalis" comes from Latin, and it is the romanization of the Greek "καμηλοπάρδαλις" meaning "giraffe", from "κάμηλος" ("kamēlos"), "camel" + "πάρδαλις" ("pardalis"), "spotted", because it has a long neck like a camel and spots like a leopard.
Features.
Stars.
Although Camelopardalis is the 18th largest constellation, it is not particularly bright; its brightest stars are only of fourth magnitude, and it contains just four stars brighter than magnitude 5.0.
Variable stars in the constellation include U Camelopardalis, VZ Camelopardalis, and the Mira variables T Camelopardalis, X Camelopardalis, and R Camelopardalis. RU Camelopardalis is one of the brighter Type II Cepheids visible in the night sky.
In 2011 a supernova was discovered in the constellation.
Deep-sky objects.
Camelopardalis is in the part of the celestial sphere facing away from the galactic plane. Accordingly, many distant galaxies are visible within its borders.
Meteor showers.
The annual May meteor shower known as the Camelopardalids, which arises from comet 209P/LINEAR, has a radiant in Camelopardalis.
History.
Camelopardalis is not one of Ptolemy's 48 constellations in the "Almagest". It was created by Petrus Plancius in 1613. It first appeared in a globe designed by him and produced by Pieter van den Keere. One year later, Jakob Bartsch featured it in his atlas. Johannes Hevelius depicted this constellation in his works, which were so influential that it was referred to as Camelopardali Hevelii, abbreviated Camelopard. Hevel.
Part of the constellation was hived off in 1810 by William Croswell to form the constellation Sciurus Volans, the Flying Squirrel. However, this was not taken up by later cartographers.
Equivalents.
In Chinese astronomy, the stars of Camelopardalis are located within a group of circumpolar stars called the Purple Forbidden Enclosure (紫微垣 "Zǐ Wēi Yuán").
Convention of Kanagawa
https://en.wikipedia.org/wiki?curid=6365
The Convention of Kanagawa, also known as the Kanagawa Treaty or the Japan-US Treaty of Peace and Amity, was a treaty signed between the United States and the Tokugawa Shogunate on March 31, 1854. Signed under threat of force, it effectively ended Japan's 220-year-old policy of national seclusion ("sakoku") by opening the ports of Shimoda and Hakodate to American vessels. It also ensured the safety of American castaways and established the position of an American consul in Japan. The treaty precipitated the signing of similar treaties establishing diplomatic relations with other Western powers.
Isolation of Japan.
Since the beginning of the 17th century, the Tokugawa Shogunate pursued a policy of isolating the country from outside influences. Foreign trade was maintained only with the Dutch and the Chinese and was conducted exclusively at Nagasaki under a strict government monopoly. This "Pax Tokugawa" period is largely associated with domestic peace, social stability, commercial development, and expanded literacy. The policy had two main objectives.
By the early 19th century, this policy of isolation was increasingly under challenge. In 1844, King William II of the Netherlands sent a letter urging Japan to end the isolation policy on its own before change would be forced from the outside. In 1846, an official American expedition led by Commodore James Biddle arrived in Japan asking for ports to be opened for trade but was sent away.
Perry expedition.
In 1853, United States Navy Commodore Matthew C. Perry was sent with a fleet of warships by U.S. President Millard Fillmore to force the opening of Japanese ports to American trade, through the use of gunboat diplomacy if necessary. President Fillmore's letter shows the U.S. sought trade with Japan to open export markets for American goods like gold from California, enable U.S. ships to refuel in Japanese ports, and secure protections and humane treatment for any American sailors shipwrecked on Japan's shores. The growing commerce between America and China, the presence of American whalers in waters offshore Japan, and the increasing monopolization of potential coaling stations by the British and French in Asia were all contributing factors. The Americans were also driven by concepts of manifest destiny and the desire to impose the perceived benefits of western civilization and Christianity on what they perceived as backward Asian nations. From the Japanese perspective, increasing contacts with foreign warships and the increasing disparity between western military technology and the Japanese feudal armies fostered growing concern. The Japanese had been keeping abreast of world events via information gathered from Dutch traders in Dejima and had been forewarned by the Dutch of Perry's voyage. There was a considerable internal debate in Japan on how best to meet this potential threat to Japan's economic and political sovereignty in light of events occurring in China with the Opium Wars.
Perry arrived with four warships at Uraga, at the mouth of Edo Bay, on July 8, 1853. He refused Japanese demands that he proceed to Nagasaki, the designated port for foreign contact. After threatening to continue directly on to Edo, the nation's capital, and to burn it to the ground if necessary, he was allowed to land at nearby Kurihama on July 14 and to deliver his letter. The refusal was intentional, as Perry wrote in his journal: "To show these princes how little I regarded their order for me to depart, on getting on board I immediately ordered the whole squadron underway, not to leave the bay... but to go higher up... would produce a decided influence upon the pride and conceit of the government, and cause a more favorable consideration of the President's letter." Perry's show of force did not end there; he continued to push the boundaries set by the Japanese. He ordered the squadron to survey Edo Bay, which led to a stand-off between Japanese officers with swords and Americans with guns. By firing his guns into the water, Perry demonstrated the squadron's military might, an act that strongly shaped Japanese perceptions of Perry and the United States.
Despite years of debate over the isolation policy, Perry's letter created great controversy within the highest levels of the Tokugawa shogunate. The shogun himself, Tokugawa Ieyoshi, died days after Perry's departure and was succeeded by his sickly young son, Tokugawa Iesada, leaving effective administration in the hands of the Council of Elders ("rōjū") led by Abe Masahiro. Abe felt that it was impossible for Japan to resist the American demands by military force, yet he was reluctant to take any action on his own authority in such an unprecedented situation. Attempting to legitimize any decision taken, Abe polled all of the daimyo for their opinions. This was the first time the Tokugawa shogunate had allowed its decision-making to become a matter of public debate, and it had the unforeseen consequence of portraying the shogunate as weak and indecisive. The results of the poll also failed to provide Abe with an answer; of the 61 known responses, 19 were in favour of accepting the American demands and 19 were opposed. Of the remainder, 14 gave vague responses expressing concern about possible war, 7 suggested making temporary concessions, and 2 advised simply going along with whatever was decided.
Perry returned on February 11, 1854, with an even larger force of eight warships and made it clear that he would not leave until a treaty was signed. He continued his manipulation of the setting: keeping himself aloof from lower-ranking officials, implying the use of force, surveying the harbor, and refusing to meet in the designated negotiation sites. Negotiations began on March 8 and proceeded for around one month. When Perry arrived, each side staged a performance for the other: the Americans gave a technology demonstration, and the Japanese put on a sumo wrestling show. While the new technology awed the Japanese, Perry was unimpressed by the sumo wrestlers and perceived the performance as foolish and degrading: "This disgusting exhibition did not terminate until the whole twenty-five had, successively, in pairs, displayed their immense powers and savage qualities." The Japanese side gave in to almost all of Perry's demands, with the exception of a commercial agreement modelled after previous American treaties with China, which Perry agreed to defer to a later time. The main controversy centered on the selection of the ports to open, with Perry rejecting Nagasaki.
The treaty, written in English, Dutch, Chinese and Japanese, was signed on March 31, 1854, at what is now Kaikō Hiroba (Port Opening Square) Yokohama, a site adjacent to the current Yokohama Archives of History. The celebratory events for the signing ceremony included a Kabuki play from the Japanese side and, from the American side, U.S. military band music and blackface minstrelsy.
Treaty of Peace and Amity (1854).
The "Japan-US Treaty of Peace and Amity" has twelve articles:
At the time, shogun Tokugawa Iesada was the de facto ruler of Japan; for the Emperor of Japan to interact in any way with foreigners was out of the question. Perry concluded the treaty with representatives of the shogun, led by plenipotentiary Hayashi Akira (Daigaku-no-kami), and the text was endorsed subsequently, albeit reluctantly, by Emperor Kōmei.
The treaty was ratified on February 21, 1855.
Consequences of the treaty.
In the short term, the U.S. was content with the agreement since Perry had achieved his primary objective of breaking Japan's seclusion policy and setting the grounds for protection of American citizens and an eventual commercial agreement. On the other hand, the Japanese were forced into this trade, and many saw it as a sign of weakness. The Tokugawa shogunate could point out that the treaty was not actually signed by the shogun, or indeed any of his "rōjū", and that it had at least averted the possibility of immediate military confrontation.
Externally, the treaty led to the United States-Japan Treaty of Amity and Commerce, the "Harris Treaty" of 1858, which allowed the establishment of foreign concessions, extraterritoriality for foreigners, and minimal import taxes for foreign goods. The Japanese chafed under the "unequal treaty system" which characterized Asian and western relations during this period. The Kanagawa treaty was also followed by similar agreements with the United Kingdom (Anglo-Japanese Friendship Treaty, October 1854), Russia (Treaty of Shimoda, February 7, 1855), and France (Treaty of Amity and Commerce between France and Japan, October 9, 1858).
Internally, the treaty had far-reaching consequences. Decisions to suspend previous restrictions on military activities led to re-armament by many domains and further weakened the position of the shogun. Debate over foreign policy and popular outrage over perceived appeasement of the foreign powers was a catalyst for the "sonnō jōi" ("revere the Emperor, expel the barbarians") movement and a shift in political power from Edo back to the Imperial Court in Kyoto. The opposition of Emperor Kōmei to the treaties further lent support to the movement to overthrow the shogunate, and eventually to the Meiji Restoration, which affected all realms of Japanese life. Following this period came an increase in foreign trade, the rise of Japanese military might, and the later rise of Japanese economic and technological advancement. Westernization at the time was a defense mechanism, but Japan has since found a balance between Western modernity and Japanese tradition.
Canis Major
https://en.wikipedia.org/wiki?curid=6366
Canis Major is a constellation in the southern celestial hemisphere. In the second century, it was included in Ptolemy's 48 constellations, and is counted among the 88 modern constellations. Its name is Latin for "greater dog" in contrast to Canis Minor, the "lesser dog"; both figures are commonly represented as following the constellation of Orion the hunter through the sky. The Milky Way passes through Canis Major and several open clusters lie within its borders, most notably M41.
Canis Major contains Sirius, the brightest star in the night sky, known as the "dog star". It is bright because of its proximity to the Solar System and its intrinsic brightness. In contrast, the other bright stars of the constellation are stars of great distance and high luminosity. At magnitude 1.5, Epsilon Canis Majoris (Adhara) is the second-brightest star of the constellation and the brightest source of extreme ultraviolet radiation in the night sky. Next in brightness are the yellow-white supergiant Delta (Wezen) at 1.8, the blue-white giant Beta (Mirzam) at 2.0, blue-white supergiants Eta (Aludra) at 2.4 and Omicron2 at 3.0, and white spectroscopic binary Zeta (Furud), also at 3.0. The red hypergiant VY CMa is one of the largest stars known, while the neutron star RX J0720.4-3125 has a radius of a mere 5 km.
History and myths.
In western astronomy.
In ancient Mesopotamia, Sirius, named KAK.SI.SA2 by the Babylonians, was seen as an arrow aiming towards Orion, while the southern stars of Canis Major and a part of Puppis were viewed as a bow, named BAN in the "Three Stars Each" tablets, dating to around 1100 BC. In the later compendium of Babylonian astronomy and astrology titled "MUL.APIN", the arrow, Sirius, was also linked with the warrior Ninurta, and the bow with Ishtar, daughter of Enlil. Ninurta was linked to the later deity Marduk, who was said to have slain the ocean goddess Tiamat with a great bow, and worshipped as the principal deity in Babylon. The Ancient Greeks replaced the bow and arrow depiction with that of a dog.
In Greek Mythology, Canis Major represented the dog Laelaps, a gift from Zeus to Europa; or sometimes the hound of Procris, Diana's nymph; or the one given by Aurora to Cephalus, so famed for its speed that Zeus elevated it to the sky. It was also considered to represent one of Orion's hunting dogs, pursuing Lepus the Hare or helping Orion fight Taurus the Bull; and is referred to in this way by Aratos, Homer and Hesiod. The ancient Greeks refer only to one dog, but by Roman times, Canis Minor appears as Orion's second dog. Alternative names include Canis Sequens and Canis Alter. Canis Syrius was the name used in the 1521 "Alfonsine tables".
The Roman myth refers to Canis Major as "Custos Europae", the dog guarding Europa but failing to prevent her abduction by Jupiter in the form of a bull, and as "Janitor Lethaeus", "the watchdog". In medieval Arab astronomy, the constellation became "al-Kalb al-Akbar", "the Greater Dog", transcribed as "Alcheleb Alachbar" by 17th century writer Edmund Chilmead. Islamic scholar Abū Rayḥān al-Bīrūnī referred to Orion as "Kalb al-Jabbār", "the Dog of the Giant". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Major and Canis Minor and is the herald of two weeks of hot weather.
In non-western astronomy.
In Chinese astronomy, the modern constellation of Canis Major is located in the Vermilion Bird (朱雀, "Zhū Què"), where its stars were classified in several separate asterisms. The Military Market (軍市, "Jūnshì") was a circular pattern of stars containing Nu3, Beta, Xi1 and Xi2, and some stars from Lepus. The Wild Cockerel (野雞, "Yějī") was at the centre of the Military Market, although it is uncertain which stars depicted what; Schlegel reported that the stars Omicron and Pi Canis Majoris might have represented it, while Beta or Nu2 have also been proposed. Sirius was "Tiānláng" (天狼), the Celestial Wolf, denoting invasion and plunder. Southeast of the Wolf was the asterism "Húshǐ" (弧矢), the celestial Bow and Arrow, which was interpreted as containing Delta, Epsilon, Eta and Kappa Canis Majoris and Delta Velorum. Alternatively, the arrow was depicted by Omicron2 and Eta and aimed at Sirius (the Wolf), while the bow comprised Kappa, Epsilon, Sigma, Delta and 164 Canis Majoris, together with Pi and Omicron Puppis.
Both the Māori people and the people of the Tuamotus recognized the figure of Canis Major as a distinct entity, though it was sometimes absorbed into other constellations. ', also called ' and ', ("The Assembly of " or "The Assembly of Sirius") was a Māori constellation that included both Canis Minor and Canis Major, along with some surrounding stars. Related was ', also called ', the Mirror of , formed from an undefined group of stars in Canis Major. They called Sirius ' and ', corresponding to two of the names for the constellation, though ' was a name applied to other stars in various Māori groups and other Polynesian cosmologies. The Tuamotu people called Canis Major "", "the abiding assemblage of ".
The Tharumba people of the Shoalhaven River saw three stars of Canis Major as ' (Bat) and his two wives ' (Mrs Brown Snake) and ' (Mrs Black Snake); bored of following their husband around, the women try to bury him while he is hunting a wombat down its hole. He spears them, and all three are placed in the sky as the constellation '. To the Boorong people of Victoria, Sigma Canis Majoris was Unurgunite (which has become the official name of this star), and its flanking stars Delta and Epsilon were his two wives. The moon (', "native cat") sought to lure the further wife (Epsilon) away, but Unurgunite assaulted him, and he has been wandering the sky ever since.
Characteristics.
Canis Major is a constellation in the Southern Hemisphere's summer (or northern hemisphere's winter) sky, bordered by Monoceros (which lies between it and Canis Minor) to the north, Puppis to the east and southeast, Columba to the southwest, and Lepus to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CMa". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a quadrilateral; in the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −11.03° and −33.25°. Covering 380 square degrees or 0.921% of the sky, it ranks 43rd of the 88 currently-recognized constellations in size.
Features.
Stars.
Canis Major is a prominent constellation because of its many bright stars. These include Sirius (Alpha Canis Majoris), the brightest star in the night sky, as well as three other stars above magnitude 2.0. Furthermore, two other stars are thought to have previously outshone all others in the night sky—Adhara (Epsilon Canis Majoris) shone at −3.99 around 4.7 million years ago, and Mirzam (Beta Canis Majoris) peaked at −3.65 around 4.42 million years ago. Another, NR Canis Majoris, will be brightest at magnitude −0.88 in about 2.87 million years' time.
The German cartographer Johann Bayer used the Greek letters Alpha through Omicron to label the most prominent stars in the constellation, including three adjacent stars as Nu and two further pairs as Xi and Omicron, while subsequent observers designated further stars in the southern parts of the constellation that were hard to discern from Central Europe. Bayer's countryman Johann Elert Bode later added Sigma, Tau and Omega; the French astronomer Nicolas Louis de Lacaille added lettered stars a to k (though none are in use today). John Flamsteed numbered 31 stars, with 3 Canis Majoris being placed by Lacaille into Columba as Delta Columbae (Flamsteed had not recognised Columba as a distinct constellation). He also labelled two stars—his 10 and 13 Canis Majoris—as Kappa1 and Kappa2 respectively, but subsequent cartographers such as Francis Baily and John Bevis dropped the fainter former star, leaving Kappa2 as the sole Kappa. Flamsteed's listing of Nu1, Nu2, Nu3, Xi1, Xi2, Omicron1 and Omicron2 have all remained in use.
Sirius is the brightest star in the night sky at apparent magnitude −1.46 and one of the closest stars to Earth at a distance of 8.6 light-years. Its name comes from the Greek word for "scorching" or "searing". Sirius is also a binary star; its companion Sirius B is a white dwarf of magnitude 8.4, roughly 10,000 times fainter than Sirius A to observers on Earth. The two orbit each other every 50 years. Their closest approach last occurred in 1993 and they will be at their greatest separation between 2020 and 2025. Sirius was the basis for the ancient Egyptian calendar. The star marked the Great Dog's mouth on Bayer's star atlas.
Flanking Sirius are Beta and Gamma Canis Majoris. Also called Mirzam or Murzim, Beta is a blue-white Beta Cephei variable star of magnitude 2.0, which varies by a few hundredths of a magnitude over a period of six hours. Mirzam is 500 light-years from Earth, and its traditional name means "the announcer", referring to its position as the "announcer" of Sirius, as it rises a few minutes before Sirius does. Gamma, also known as Muliphein, is a fainter star of magnitude 4.12, in reality a blue-white bright giant of spectral type B8IIe located 441 light-years from Earth. Iota Canis Majoris, lying between Sirius and Gamma, is another star that has been classified as a Beta Cephei variable, varying from magnitude 4.36 to 4.40 over a period of 1.92 hours. It is a remote blue-white supergiant star of spectral type B3Ib, around 46,000 times as luminous as the Sun and, at 2,500 light-years distant, 300 times further away than Sirius.
Epsilon, Omicron2, Delta, and Eta Canis Majoris were called "Al Adzari" ("the virgins") in medieval Arabic tradition. Marking the dog's right thigh on Bayer's atlas is Epsilon Canis Majoris, also known as Adhara. At magnitude 1.5, it is the second-brightest star in Canis Major and the 23rd-brightest star in the sky. It is a blue-white supergiant of spectral type B2Iab, around 404 light-years from Earth. This star is one of the brightest known extreme ultraviolet sources in the sky. It is a binary star; the secondary is of magnitude 7.4. Its traditional name means "the virgins", having been transferred from the group of stars to Epsilon alone. Nearby is Delta Canis Majoris, also called Wezen. It is a yellow-white supergiant of spectral type F8Iab and magnitude 1.84, around 1,605 light-years from Earth. With a traditional name meaning "the weight", Wezen is 17 times as massive and 50,000 times as luminous as the Sun. If located in the centre of the Solar System, it would extend out to Earth, as its diameter is 200 times that of the Sun. Only around 10 million years old, Wezen has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has a core of iron, it will collapse and explode as a supernova. Nestled between Adhara and Wezen lies Sigma Canis Majoris, known as Unurgunite to the Boorong and Wotjobaluk people, a red supergiant of spectral type K7Ib that varies irregularly between magnitudes 3.43 and 3.51.
Also called Aludra, Eta Canis Majoris is a blue-white supergiant of spectral type B5Ia with a luminosity 176,000 times and a diameter around 80 times that of the Sun. Classified as an Alpha Cygni type variable star, Aludra varies in brightness from magnitude 2.38 to 2.48 over a period of 4.7 days. It is located 1,120 light-years away. To the west of Adhara lies 3.0-magnitude Zeta Canis Majoris or Furud, around 362 light-years distant from Earth. It is a spectroscopic binary whose components orbit each other every 1.85 years, the combined spectrum indicating a main star of spectral type B2.5V.
Between these stars and Sirius lie Omicron1, Omicron2, and Pi Canis Majoris. Omicron2 is a massive supergiant star about 21 times as massive as the Sun. Only 7 million years old, it has exhausted the supply of hydrogen at its core and is now fusing helium. It is an Alpha Cygni variable that undergoes periodic non-radial pulsations, which cause its brightness to cycle from magnitude 2.93 to 3.08 over a 24.44-day interval. Omicron1 is an orange K-type supergiant of spectral type K2.5Iab that is an irregular variable star, varying between apparent magnitudes 3.78 and 3.99. Around 18 times as massive as the Sun, it shines with 65,000 times its luminosity.
North of Sirius lie Theta and Mu Canis Majoris, Theta being the most northerly star with a Bayer designation in the constellation. Around 8 billion years old, it is an orange giant of spectral type K4III that is around as massive as the Sun but has expanded to 30 times the Sun's diameter. Mu is a multiple star system located around 1,244 light-years away, its components discernible in a small telescope as a 5.3-magnitude yellow-hued star and a 7.1-magnitude bluish star. The brighter star is a giant of spectral type K2III, while the companion is a main sequence star of spectral type B9.5V. Nu1 Canis Majoris is a yellow-hued giant star of magnitude 5.7, 278 light-years away; it is at the threshold of naked-eye visibility. It has a companion of magnitude 8.1.
At the southern limits of the constellation lie Kappa and Lambda Canis Majoris. Although they have similar spectra and appear near each other as viewed from Earth, the two are unrelated. Kappa is a Gamma Cassiopeiae variable of spectral type B2Vne, which brightened by 50% between 1963 and 1978, from magnitude 3.96 or so to 3.52. It is around 659 light-years distant. Lambda is a blue-white B-type main sequence dwarf with an apparent magnitude of 4.48 located around 423 light-years from Earth. It is 3.7 times as wide as and 5.5 times as massive as the Sun, and shines with 940 times its luminosity.
Canis Major is also home to many variable stars. EZ Canis Majoris is a Wolf–Rayet star of spectral type WN4 that varies between magnitudes 6.71 and 6.95 over a period of 3.766 days; the cause of its variability is unknown but thought to be related to its stellar wind and rotation. VY Canis Majoris is a remote red hypergiant located approximately 3,800 light-years away from Earth. It is one of the largest stars known (sometimes described as the largest known) and also one of the most luminous, with a radius varying from 1,420 to 2,200 times the Sun's radius and a luminosity around 300,000 times greater than the Sun's. Its current mass is about 17 ± 8 solar masses, having shed material from an initial mass of 25–32 solar masses. VY CMa is also surrounded by a red reflection nebula made of material expelled by the strong stellar winds of its central star. W Canis Majoris is a type of red giant known as a carbon star; a semiregular variable, it ranges between magnitudes 6.27 and 7.09 over a period of 160 days. A cool star, it has a surface temperature of around 2,900 K and a radius 234 times that of the Sun, its distance estimated at 1,444–1,450 light-years from Earth. At the other extreme in size is RX J0720.4-3125, a neutron star with a radius of around 5 km. Exceedingly faint, it has an apparent magnitude of 26.6. Its spectrum and temperature appear to be mysteriously changing over several years. The nature of the changes is unclear, but it is possible they were caused by an event such as the star's absorption of an accretion disc.
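Figures like those quoted for W Canis Majoris can be tied together with the Stefan–Boltzmann law: luminosity scales as the square of the radius times the fourth power of the effective temperature. A small Python sketch using the radius and temperature above, and assuming a solar effective temperature of 5,772 K:

    # Luminosity in solar units from radius (solar radii) and temperature (K):
    # L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
    T_SUN = 5772.0  # K, assumed solar effective temperature

    def luminosity_solar(radius_rsun, temp_k):
        return radius_rsun**2 * (temp_k / T_SUN)**4

    # W Canis Majoris: ~234 solar radii at ~2,900 K (figures quoted above)
    print(round(luminosity_solar(234, 2900)))  # ~3,500 times the Sun's luminosity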
Tau Canis Majoris is a Beta Lyrae-type eclipsing multiple star system that varies from magnitude 4.32 to 4.37 over 1.28 days. Its four main component stars are hot O-type stars with a combined mass 80 times that of the Sun, shining with 500,000 times its luminosity, but little is known of their individual properties. A fifth component, a magnitude 10 star, lies at a distance of . The system is only 5 million years old. UW Canis Majoris is another Beta Lyrae-type star 3,000 light-years from Earth; it is an eclipsing binary that ranges in magnitude from a minimum of 5.3 to a maximum of 4.8. It has a period of 4.4 days; its components are two massive hot blue stars, one a blue supergiant of spectral type O7.5–8 Iab, the other a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib. The stars are 200,000 and 63,000 times as luminous as the Sun. However, the fainter star is the more massive, at 19 solar masses to the primary's 16. R Canis Majoris is another eclipsing binary, varying from magnitude 5.7 to 6.34 over 1.13 days, with a third star orbiting these two every 93 years. The shortness of the orbital period and the low mass ratio between the two main components make this an unusual Algol-type system.
Seven star systems have been found to have planets. Nu2 Canis Majoris is an ageing orange giant of spectral type K1III and apparent magnitude 3.91, located around 64 light-years distant. Around 1.5 times as massive and 11 times as luminous as the Sun, it is orbited over a period of 763 days by a planet 2.6 times as massive as Jupiter. HD 47536 is likewise an ageing orange giant found to have a planetary system, echoing the fate of the Solar System in a few billion years as the Sun ages and becomes a giant. Conversely, HD 45364 is a star 107 light-years distant that is a little smaller and cooler than the Sun, of spectral type G8V, which has two planets discovered in 2008. With orbital periods of 228 and 342 days, the planets are in a 3:2 orbital resonance (342/228 = 1.5), which helps stabilise the system. HD 47186 is another sunlike star with two planets; the inner, HD 47186 b, takes four days to complete an orbit and has been classified as a Hot Neptune, while the outer, HD 47186 c, has an eccentric 3.7-year period orbit and a mass similar to Saturn's. HD 43197 is a sunlike star around 183 light-years distant that has two planets: a hot Jupiter-sized planet with an eccentric orbit, and HD 43197 c, another massive Jovian planet with a slightly oblong orbit outside its star's habitable zone.
Z Canis Majoris is a star system a mere 300,000 years old composed of two pre-main-sequence stars—a FU Orionis star and a Herbig Ae/Be star, which has brightened episodically by two magnitudes to magnitude 8 in 1987, 2000, 2004 and 2008. The more massive Herbig Ae/Be star is enveloped in an irregular roughly spherical cocoon of dust that has an inner diameter of and outer diameter of . The cocoon has a hole in it through which light shines that covers an angle of 5 to 10 degrees of its circumference. Both stars are surrounded by a large envelope of in-falling material left over from the original cloud that formed the system. Both stars are emitting jets of material, that of the Herbig Ae/Be star being much larger—11.7 light-years long. Meanwhile, FS Canis Majoris is another star with infra-red emissions indicating a compact shell of dust, but it appears to be a main-sequence star that has absorbed material from a companion. These stars are thought to be significant contributors to interstellar dust.
Deep-sky objects.
The band of the Milky Way goes through Canis Major, with only patchy obscurement by interstellar dust clouds. It is bright in the northeastern corner of the constellation, as well as in a triangular area between Adhara, Wezen and Aludra, with many stars visible in binoculars. Canis Major boasts several open clusters. The only Messier object is M41 (NGC 2287), an open cluster with a combined visual magnitude of 4.5, around 2,300 light-years from Earth. Located 4 degrees south of Sirius, it contains contrasting blue, yellow and orange stars and covers an area the apparent size of the full moon, in reality around 25 light-years in diameter. Its most luminous stars have already evolved into giants. The brightest is a 6.3-magnitude star of spectral type K3. Located in the field is 12 Canis Majoris, though this star is only 670 light-years distant. NGC 2360, known as Caroline's Cluster after its discoverer Caroline Herschel, is an open cluster located 3.5 degrees west of Muliphein with a combined apparent magnitude of 7.2. Around 15 light-years in diameter, it is located 3,700 light-years away from Earth and has been dated to around 2.2 billion years old. NGC 2362 is a small, compact open cluster, 5,200 light-years from Earth. It contains about 60 stars, of which Tau Canis Majoris is the brightest member. Located around 3 degrees northeast of Wezen, it covers an area around 12 light-years in diameter, though the stars appear huddled around Tau when seen through binoculars. It is a very young open cluster, as its member stars are only a few million years old. Lying 2 degrees southwest of NGC 2362 is NGC 2354, a fainter open cluster of magnitude 6.5 with around 15 member stars visible with binoculars. Located around 30' northeast of NGC 2360, NGC 2359 (Thor's Helmet or the Duck Nebula) is a relatively bright emission nebula in Canis Major, with an approximate magnitude of 10, some 10,000 light-years from Earth. The nebula is shaped by HD 56925, an unstable Wolf–Rayet star embedded within it.
In 2003, an overdensity of stars in the region was announced to be the Canis Major Dwarf, the closest satellite galaxy to Earth. However, there remains debate over whether it represents a disrupted dwarf galaxy or in fact a variation in the thin and thick disk and spiral arm populations of the Milky Way. Investigation of the area yielded only ten RR Lyrae variables—consistent with the Milky Way's halo and thick disk populations rather than a separate dwarf spheroidal galaxy. On the other hand, a globular cluster in Puppis, NGC 2298—which appears to be part of the Canis Major dwarf system—is extremely metal-poor, suggesting it did not arise from the Milky Way's thick disk, and instead is of extragalactic origin.
NGC 2207 and IC 2163 are a pair of face-on interacting spiral galaxies located 125 million light-years from Earth. About 40 million years ago, the two galaxies had a close encounter and are now moving farther apart; nevertheless, the smaller IC 2163 will eventually be incorporated into NGC 2207. As the interaction continues, gas and dust will be perturbed, sparking extensive star formation in both galaxies. Supernovae have been observed in NGC 2207 in 1975 (type Ia SN 1975a), 1999 (type Ib SN 1999ec), 2003 (type Ib SN 2003H), and 2013 (type II SN 2013ai). Located 16 million light-years distant, ESO 489-056 is an irregular dwarf and low-surface-brightness galaxy that has one of the lowest metallicities known.
Canis Minor
https://en.wikipedia.org/wiki?curid=6367
Canis Minor is a small constellation in the northern celestial hemisphere. In the second century, it was included as an asterism, or pattern, of two stars in Ptolemy's 48 constellations, and it is counted among the 88 modern constellations. Its name is Latin for "lesser dog", in contrast to Canis Major, the "greater dog"; both figures are commonly represented as following the constellation of Orion the hunter.
Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9. The constellation's dimmer stars were noted by Johann Bayer, who named eight stars including Alpha and Beta, and John Flamsteed, who numbered fourteen. Procyon is the eighth-brightest star in the night sky, as well as one of the closest. A yellow-white main-sequence star, it has a white dwarf companion. Gomeisa is a blue-white main-sequence star. Luyten's Star is a ninth-magnitude red dwarf and the Solar System's next closest stellar neighbour in the constellation after Procyon. Additionally, Procyon and Luyten's Star are only 1.12 light-years away from each other, and Procyon would be the brightest star in Luyten's Star's sky. The fourth-magnitude HD 66141, which has evolved into an orange giant towards the end of its life cycle, was discovered to have a planet in 2012. There are two faint deep-sky objects within the constellation's borders. The 11 Canis-Minorids are a meteor shower that can be seen in early December.
History and mythology.
Though strongly associated with the Classical Greek uranographic tradition, Canis Minor originates from ancient Mesopotamia. Procyon and Gomeisa were called "MASH.TAB.BA" or "twins" in the "Three Stars Each" tablets, dating to around 1100 BC. In the later "MUL.APIN", this name was also applied to the pairs of Pi3 and Pi4 Orionis and Zeta and Xi Orionis. The meaning of "MASH.TAB.BA" evolved as well, becoming the twin deities Lulal and Latarak, who are on the opposite side of the sky from "Papsukkal", the True Shepherd of Heaven in Babylonian mythology. Canis Minor was also given the name "DAR.LUGAL", its position defined as "the star which stands behind it [Orion]", in the "MUL.APIN"; the constellation represents a rooster. This name may have also referred to the constellation Lepus. "DAR.LUGAL" was also denoted "DAR.MUŠEN" and "DAR.LUGAL.MUŠEN" in Babylonia. Canis Minor was then called "tarlugallu" in Akkadian astronomy.
Canis Minor was one of the original 48 constellations formulated by Ptolemy in his second-century Almagest, in which it was defined as a specific pattern (asterism) of stars; Ptolemy identified only two stars and hence no depiction was possible. The Ancient Greeks called the constellation Προκύων ("Procyon"), "coming before the dog"; this was transliterated into Latin as "Antecanis", "Praecanis", or variations thereof, by Cicero and others. Roman writers also appended the descriptors "parvus", "minor" or "minusculus" ("small" or "lesser", for its faintness), "septentrionalis" ("northerly", for its position in relation to Canis Major), "primus" (rising "first") or "sinister" (rising to the "left") to its name "Canis".
In Greek mythology, Canis Minor was sometimes connected with the Teumessian Fox, a beast turned into stone with its hunter, Laelaps, by Zeus, who placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). Eratosthenes accompanied the Little Dog with Orion, while Hyginus linked the constellation with Maera, a dog owned by Icarius of Athens. On discovering the latter's death, the dog and Icarius' daughter Erigone took their lives and all three were placed in the sky—Erigone as Virgo and Icarius as Boötes. As a reward for his faithfulness, the dog was placed along the "banks" of the Milky Way, which the ancients believed to be a heavenly river, where he would never suffer from thirst.
The medieval Arabic astronomers maintained the depiction of Canis Minor ("al-Kalb al-Asghar" in Arabic) as a dog; in his Book of the Fixed Stars, Abd al-Rahman al-Sufi included a diagram of the constellation with a canine figure superimposed. There was one slight difference between the Ptolemaic vision of Canis Minor and the Arabic; al-Sufi claims Mirzam, now assigned to Orion, as part of both Canis Minor—the collar of the dog—and its modern home. The Arabic names for both Procyon and Gomeisa alluded to their proximity and resemblance to Sirius, though they were not direct translations of the Greek; Procyon was called "ash-Shi'ra ash-Shamiya", the "Syrian Sirius" and Gomeisa was called "ash-Shira al-Ghamisa", the Sirius with bleary eyes. Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Minor and Canis Major and is the herald of two weeks of hot weather.
The ancient Egyptians thought of this constellation as Anubis, the jackal god.
Alternative names have been proposed: Johann Bayer in the early 17th century termed the constellation "Fovea" ("the Pit") and "Morus" ("Sycamine Tree"). Seventeenth-century German poet and author Philippus Caesius linked it to the dog of Tobias from the Apocrypha. Richard A. Proctor gave the constellation the name "Felis" ("the Cat") in 1870 (contrasting with Canis Major, which he had abbreviated to "Canis", "the Dog"), explaining that he sought to shorten the constellation names to make them more manageable on celestial charts. Occasionally, Canis Minor is confused with Canis Major and given the name "Canis Orionis" ("Orion's Dog").
In non-Western astronomy.
In Chinese astronomy, the stars corresponding to Canis Minor lie in the Vermilion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"). Procyon, Gomeisa and Eta Canis Minoris form an asterism known as Nánhé, the Southern River. With its counterpart, the Northern River Beihe (Castor and Pollux), Nánhé was also associated with a gate or sentry. Along with Zeta and 8 Cancri, 6 Canis Minoris and 11 Canis Minoris formed the asterism "Shuiwei", which literally means "water level". Combined with additional stars in Gemini, Shuiwei represented an official who managed floodwaters or a marker of the water level. Neighboring Korea recognized four stars in Canis Minor as part of a different constellation, "the position of the water". This constellation was located in the Red Bird, the southern portion of the sky.
Polynesian peoples often did not recognize Canis Minor as a constellation, but they saw Procyon as significant and often named it; in the Tuamotu Archipelago it was known as "Hiro", meaning "twist as a thread of coconut fiber", and "Kopu-nui-o-Hiro" ("great paunch of Hiro"), which was either a name for the modern figure of Canis Minor or an alternative name for Procyon. Other names included "Vena" (after a goddess), on Mangaia and "Puanga-hori" (false "Puanga", the name for Rigel), in New Zealand. In the Society Islands, Procyon was called "Ana-tahua-vahine-o-toa-te-manava", literally "Aster the priestess of brave heart", figuratively the "pillar for elocution". The Wardaman people of the Northern Territory in Australia gave Procyon and Gomeisa the names "Magum" and "Gurumana", describing them as humans who were transformed into gum trees in the Dreaming. Although their skin had turned to bark, they were able to speak with a human voice by rustling their leaves.
The Aztec calendar was related to their cosmology. The stars of Canis Minor were incorporated along with some stars of Orion and Gemini into an asterism associated with the day called "Water".
Characteristics.
Lying directly south of Gemini's bright stars Castor and Pollux, Canis Minor is a small constellation bordered by Monoceros to the south, Gemini to the north, Cancer to the northeast, and Hydra to the east. It does not border Canis Major; Monoceros is in between the two. Covering 183 square degrees, Canis Minor ranks seventy-first of the 88 constellations in size. It appears prominently in the southern sky during the Northern Hemisphere's winter. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Most visible in the evening sky from January to March, Canis Minor is most prominent at 10 p.m. during mid-February. It is then seen earlier in the evening until July, when it is only visible after sunset before setting itself, and rising in the morning sky before dawn. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "CMi".
Features.
Stars.
Canis Minor contains only two stars brighter than fourth magnitude. At magnitude 0.34, Procyon, or Alpha Canis Minoris, is the eighth-brightest star in the night sky, as well as one of the closest. Its name means "before the dog" or "preceding the dog" in Greek, as it rises an hour before the "Dog Star", Sirius, of Canis Major. It is a binary star system, consisting of a yellow-white main-sequence star of spectral type F5 IV-V, named Procyon A, and a faint white dwarf companion of spectral type DA, named Procyon B. Procyon B, which orbits the more massive star every 41 years, is of magnitude 10.7. Procyon A is 1.4 times the Sun's mass, while its smaller companion is 0.6 times as massive as the Sun. The system is from Earth, the shortest distance to a northern-hemisphere star of the first magnitude. Gomeisa, or Beta Canis Minoris, with a magnitude of 2.89, is the second-brightest star in Canis Minor. Lying from the Solar System, it is a blue-white main-sequence star of spectral class B8 Ve. Although fainter to Earth observers, it is much brighter than Procyon, and is 250 times as luminous and three times as massive as the Sun. Although its variations are slight, Gomeisa is classified as a shell star (Gamma Cassiopeiae variable), with a maximum magnitude of 2.84 and a minimum magnitude of 2.92. It is surrounded by a disk of gas which it heats and causes to emit radiation.
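Procyon's standing as both bright and nearby can be made concrete with the distance modulus, which converts apparent magnitude and distance into absolute magnitude (the magnitude a star would show from 10 parsecs). A Python sketch, assuming Procyon's well-established distance of roughly 11.5 light-years:

    import math

    LY_PER_PC = 3.2616  # light-years per parsec

    # Distance modulus: M = m - 5*log10(d_parsecs) + 5
    def absolute_magnitude(m_apparent, dist_ly):
        return m_apparent - 5 * math.log10(dist_ly / LY_PER_PC) + 5

    # Procyon: apparent magnitude 0.34 at ~11.5 light-years (assumed distance)
    print(round(absolute_magnitude(0.34, 11.5), 2))  # ~2.6

An absolute magnitude of about +2.6 suggests Procyon owes its brilliance mainly to proximity rather than to extreme intrinsic luminosity.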
Johann Bayer used the Greek letters Alpha to Eta to label the most prominent eight stars in the constellation, designating two stars as Delta (named Delta1 and Delta2). John Flamsteed numbered fourteen stars, discerning a third star he named Delta3; his star 12 Canis Minoris was not found subsequently. In Bayer's 1603 work "Uranometria", Procyon is located on the dog's belly, and Gomeisa on its neck. Gamma, Epsilon and Eta Canis Minoris lie nearby, marking the dog's neck, crown and chest, respectively. Although it has an apparent magnitude of 4.34, Gamma Canis Minoris is an orange K-type giant of spectral class K3-III C, which lies away. Its colour is obvious when seen through binoculars. It is a multiple system, consisting of the spectroscopic binary Gamma A and three optical companions, Gamma B, magnitude 13; Gamma C, magnitude 12; and Gamma D, magnitude 10. The two components of Gamma A orbit each other every 389.2 days, with an eccentric orbit that takes their separation between 2.3 and 1.4 astronomical units (AU). Epsilon Canis Minoris is a yellow bright giant of spectral class G6.5IIb of magnitude of 4.99. It lies from Earth, with 13 times the diameter and 750 times the luminosity of the Sun. Eta Canis Minoris is a giant of spectral class F0III of magnitude 5.24, which has a yellowish hue when viewed through binoculars as well as a faint companion of magnitude 11.1. Located 4 arcseconds from the primary, the companion star is actually around 440 AU from the main star and takes around 5,000 years to orbit it.
Near Procyon, three stars share the name Delta Canis Minoris. Delta1 is a yellow-white F-type giant of magnitude 5.25 located around from Earth. About 360 times as luminous and 3.75 times as massive as the Sun, it is expanding and cooling as it ages, having spent much of its life as a main sequence star of spectrum B6V. Also known as 8 Canis Minoris, Delta2 is an F-type main-sequence star of spectral type F2V and magnitude 5.59 which is distant. The last of the trio, Delta3 (also known as 9 Canis Minoris), is a white main sequence star of spectral type A0Vnn and magnitude 5.83 which is distant. These stars mark the paws of the Lesser Dog's left hind leg, while magnitude 5.13 Zeta marks the right. A blue-white bright giant of spectral type B8II, Zeta lies around away from the Solar System.
Lying 222 ± 7 light-years away with an apparent magnitude of 4.39, HD 66141 is 6.8 billion years old and has evolved into an orange giant of spectral type K2III with a diameter around 22 times that of the Sun, weighing 1.1 solar masses. It is 174 times as luminous as the Sun, with an absolute magnitude of −0.15. HD 66141 was mistakenly named 13 Puppis, as its celestial coordinates were recorded incorrectly when it was catalogued; it was hence mistakenly thought to lie in the constellation of Puppis. Bode gave it the name Lambda Canis Minoris, which is now obsolete. The orange giant is orbited by a planet, HD 66141b, which was detected in 2012 by measuring the star's radial velocity. The planet has a mass around 6 times that of Jupiter and a period of 480 days.
A red giant of spectral type M4III, BC Canis Minoris lies around distant from the Solar System. It is a semiregular variable star that varies between a maximum magnitude of 6.14 and minimum magnitude of 6.42. Periods of 27.7, 143.3 and 208.3 days have been recorded in its pulsations. AZ, AD and BI Canis Minoris are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. AZ is of spectral type A5IV, and ranges between magnitudes 6.44 and 6.51 over a period of 2.3 hours. AD has a spectral type of F2III, and has a maximum magnitude of 9.21 and minimum of 9.51, with a period of approximately 2.95 hours. BI is of spectral type F2 with an apparent magnitude varying around 9.19 and a period of approximately 2.91 hours.
At least three red giants are Mira variables in Canis Minor. S Canis Minoris, of spectral type M7e, is the brightest, ranging from magnitude 6.6 to 13.2 over a period of 332.94 days. V Canis Minoris ranges from magnitude 7.4 to 15.1 over a period of 366.1 days. Similar in magnitude is R Canis Minoris, which has a maximum of 7.3, but a significantly brighter minimum of 11.6. An S-type star, it has a period of 337.8 days.
YZ Canis Minoris is a red dwarf of spectral type M4.5V and magnitude 11.2, roughly three times the size of Jupiter and from Earth. It is a flare star, emitting unpredictable outbursts of energy for mere minutes, which might be much more powerful analogues of solar flares. Luyten's Star (GJ 273) is a red dwarf star of spectral type M3.5V and close neighbour of the Solar System. Its visual magnitude of 9.9 renders it too faint to be seen with the naked eye, even though it is only away. Fainter still is PSS 544-7, an eighteenth-magnitude red dwarf around 20 per cent the mass of the Sun, located from Earth. First noticed in 1991, it is thought to be a cannonball star, shot out of a star cluster and now moving rapidly through space directly away from the galactic disc.
The WZ Sagittae-type dwarf nova DY Canis Minoris (also known as VSX J074727.6+065050) flared up to magnitude 11.4 over January and February 2008 before dropping eight magnitudes to around 19.5 over approximately 80 days. It is a remote binary star system where a white dwarf and low-mass star orbit each other close enough for the former star to draw material off the latter and form an accretion disc. This material builds up until it erupts dramatically.
Deep-sky objects.
The Milky Way passes through much of Canis Minor, yet it has few deep-sky objects. William Herschel recorded four objects in his 1786 work "Catalogue of Nebulae and Clusters of Stars", including two he mistakenly believed were star clusters. NGC 2459 is a group of five thirteenth- and fourteenth-magnitude stars that appear to lie close together in the sky but are not related. A similar situation has occurred with NGC 2394, also in Canis Minor. This is a collection of fifteen unrelated stars of ninth magnitude and fainter.
Herschel also observed three faint galaxies, two of which are interacting with each other. NGC 2508 is a lenticular galaxy of thirteenth magnitude, estimated at 205 million light-years' distance (63 million parsecs) with a diameter of . Catalogued as a single object by Herschel, NGC 2402 is actually a pair of near-adjacent galaxies that appear to be interacting with each other. At only fourteenth and fifteenth magnitude respectively, the elliptical and spiral galaxies are thought to be approximately 245 million light-years distant, and each measures 55,000 light-years in diameter.
Meteor showers.
The 11 Canis-Minorids, also called the Beta Canis Minorids, are a meteor shower that arise near the fifth-magnitude star 11 Canis Minoris and were discovered in 1964 by Keith Hindley, who investigated their trajectory and proposed a common origin with the comet D/1917 F1 Mellish. However, this conclusion has been refuted subsequently as the number of orbits analysed was low and their trajectories too disparate to confirm a link. They last from 4 to 15 December, peaking over 10 and 11 December.
Centaurus
https://en.wikipedia.org/wiki?curid=6371
Centaurus is a bright constellation in the southern sky. One of the largest constellations, Centaurus was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. In Greek mythology, Centaurus represents a centaur, a creature that is half human and half horse (another constellation named after a centaur, Sagittarius, belongs to the zodiac). Notable stars include Alpha Centauri, the nearest star system to the Solar System, its neighbour in the sky Beta Centauri, and HR 5171, one of the largest stars yet discovered. The constellation also contains Omega Centauri, the brightest globular cluster visible from Earth and the largest identified in the Milky Way, possibly a remnant of a dwarf galaxy.
Notable features.
Stars.
Centaurus contains several very bright stars. Its alpha and beta stars are used as "pointer stars" to help observers find the constellation Crux. Centaurus has 281 stars above magnitude 6.5, meaning that they are visible to the unaided eye, the most of any constellation. Alpha Centauri, the closest star system to the Sun, has a high proper motion; it will be a mere half-degree from Beta Centauri in approximately 4000 years.
Alpha Centauri is a triple star system composed of a binary system orbited by Proxima Centauri, currently the nearest star to the Sun. Traditionally called Rigil Kentaurus (from the Arabic رجل قنطورس, meaning "foot of the centaur") or Toliman (from the Arabic الظليمين, meaning "two male ostriches"), the system has an overall magnitude of −0.28 and is 4.4 light-years from Earth. The primary and secondary are both yellow-hued stars, of magnitudes −0.01 and 1.35 respectively. Proxima, the tertiary star, is a red dwarf of magnitude 11.0; it appears almost 2 degrees away from the close pairing of Alpha and has an orbital period of approximately one million years. Proxima is also a flare star, with minutes-long outbursts during which it brightens by over a magnitude. The Alpha pair revolve with an 80-year period and will next appear closest together, as seen from Earth's telescopes, in 2037 and 2038; to the naked eye the pair appear as a single point of light that is the third-brightest "star" in the night sky.
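The quoted overall magnitude follows from the component magnitudes: magnitudes cannot be added directly, but the corresponding fluxes can. A short Python sketch using the figures above:

    import math

    # Combined apparent magnitude of two stars: convert each magnitude
    # to a relative flux, sum the fluxes, and convert back.
    def combined_magnitude(m1, m2):
        flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
        return -2.5 * math.log10(flux)

    # Alpha Centauri A (-0.01) and B (1.35):
    print(round(combined_magnitude(-0.01, 1.35), 2))  # -0.28, matching the quoted value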
The constellation's other first-magnitude star, Beta Centauri, lies beyond Proxima toward the narrow axis of Crux; together with Alpha it forms the far southern limb of the constellation. Also called Hadar and Agena, it is a double star; the primary is a blue-hued giant star of magnitude 0.6, 525 light-years from Earth. The secondary is of magnitude 4.0 and is separated from the primary by only a small angle, so it can be made out only at high magnification.
The northerly star Theta Centauri, officially named Menkent, is an orange giant star of magnitude 2.06. It is the only bright star of Centaurus that is easily visible from mid-northern latitudes.
The next bright object is Gamma Centauri, a binary star which appears to the naked eye at magnitude 2.2. The primary and secondary are both blue-white hued stars of magnitude 2.9; their period is 84 years.
Centaurus also has many dimmer double stars and binary stars. 3 Centauri is a double star with a blue-white hued primary of magnitude 4.5 and a secondary of magnitude 6.0. The primary is 344 light-years away.
Centaurus is home to many variable stars. R Centauri is a Mira variable star with a minimum magnitude of 11.8 and a maximum magnitude of 5.3; it is about 1,250 light-years from Earth and has a period of 18 months. V810 Centauri is a semiregular variable.
BPM 37093 is a white dwarf star whose carbon atoms are thought to have formed a crystalline structure. Since diamond also consists of carbon arranged in a crystalline lattice (though of a different configuration), scientists have nicknamed this star "Lucy" after the Beatles song "Lucy in the Sky with Diamonds".
PDS 70 (V1032 Centauri), a low-mass T Tauri star, is found in the constellation Centaurus. In July 2018 astronomers captured the first conclusive image of a protoplanetary disk containing a nascent exoplanet, named PDS 70b.
Deep-sky objects.
ω Centauri (NGC 5139), despite being listed as the constellation's "omega" star, is in fact a naked-eye globular cluster, 17,000 light-years away with a diameter of 150 light-years. It is the largest and brightest globular cluster in the Milky Way; at ten times the size of the next-largest cluster, it has a magnitude of 3.7. It is also the most luminous globular cluster in the Milky Way, at over one million solar luminosities. Omega Centauri is classified as a Shapley class VIII cluster, which means that its center is loosely concentrated. It is also one of only two globular clusters to be given a stellar designation; in its case a Bayer letter. The other is 47 Tucanae (Xi Tucanae), which has a Flamsteed number. Omega Centauri contains several million stars, most of which are yellow dwarf stars, but also possesses red giants and blue-white stars; the stars have an average age of 12 billion years. This has prompted suspicion that Omega Centauri was the core of a dwarf galaxy that had been absorbed by the Milky Way. Omega Centauri was determined to be nonstellar in 1677 by the English astronomer Edmond Halley, though it was visible as a star to the ancients. Its status as a globular cluster was determined by James Dunlop in 1827. To the unaided eye, Omega Centauri appears fuzzy and is obviously non-circular; it is approximately half a degree in diameter, the same size as the full Moon.
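The quoted distance, apparent magnitude and luminosity can be checked against one another with the standard distance-modulus relation, M = m − 5·log10(d / 10 pc). A minimal sketch, assuming a solar absolute magnitude of 4.83 and ignoring extinction and bolometric corrections:

import math

m = 3.7                      # apparent magnitude from the text
d_pc = 17000 / 3.2616        # 17,000 light-years converted to parsecs
M = m - 5 * math.log10(d_pc / 10)     # absolute magnitude
L_solar = 10 ** ((4.83 - M) / 2.5)    # luminosity relative to the Sun
print(f"M = {M:.1f}, L = {L_solar:.2e} L_sun")
# Gives M ~ -9.9 and L ~ 8e5 L_sun, of the same order as the
# "over one million solar luminosities" quoted above.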
Centaurus is also home to open clusters. NGC 3766 is an open cluster 6,300 light-years from Earth that is visible to the unaided eye. It contains approximately 100 stars, the brightest of which are 7th magnitude. NGC 5460 is another naked-eye open cluster, 2,300 light-years from Earth, that has an overall magnitude of 6 and contains approximately 40 stars.
There is one bright planetary nebula in Centaurus, NGC 3918, also known as the Blue Planetary. It has an overall magnitude of 8.0 and a central star of magnitude 11.0; it is 2600 light-years from Earth. The Blue Planetary was discovered by John Herschel and named for its color's similarity to Uranus, though in apparent size the nebula is about three times larger than the planet.
Centaurus is rich in galaxies as well. NGC 4622 is a face-on spiral galaxy located 200 million light-years from Earth (redshift 0.0146). Its spiral arms wind in both directions, which makes it nearly impossible for astronomers to determine the rotation of the galaxy. Astronomers theorize that a collision with a smaller companion galaxy near the core of the main galaxy could have led to the unusual spiral structure. NGC 5253, a peculiar irregular galaxy, is located near the border with Hydra and M83, with which it likely had a close gravitational interaction 1–2 billion years ago. This may have sparked the galaxy's high rate of star formation, which continues today and contributes to its high surface brightness. NGC 5253 includes a large nebula and at least 12 large star clusters. In the eyepiece, it is a small galaxy of magnitude 10 with dimensions of 5 arcminutes by 2 arcminutes and a bright nucleus. NGC 4945 is a spiral galaxy seen edge-on from Earth, 13 million light-years away. It is visible with any amateur telescope, as well as binoculars under good conditions; it has been described as "shaped like a candle flame", being long and thin (16' by 3'). In the eyepiece of a large telescope, its southeastern dust lane becomes visible. Another galaxy is NGC 5102, found by star-hopping from Iota Centauri. In the eyepiece, it appears as an elliptical object 9 arcminutes by 2.5 arcminutes tilted on a southwest–northeast axis.
One of the closest active galaxies to Earth is the Centaurus A galaxy, NGC 5128, at 11 million light-years away (redshift 0.00183). It has a supermassive black hole at its core, which expels massive jets of matter that emit radio waves due to synchrotron radiation. Astronomers posit that its dust lanes, not common in elliptical galaxies, are due to a previous merger with another galaxy, probably a spiral galaxy. NGC 5128 appears in the optical spectrum as a fairly large elliptical galaxy with a prominent dust lane. Its overall magnitude is 7.0 and it has been seen under perfect conditions with the naked eye, making it one of the most distant objects visible to the unaided observer. In equatorial and southern latitudes, it is easily found by star hopping from Omega Centauri. In small telescopes, the dust lane is not visible; it begins to appear with about 4 inches of aperture under good conditions. In large amateur instruments, above about 12 inches in aperture, the dust lane's west-northwest to east-southeast direction is easily discerned. Another dim dust lane on the east side of the 12-arcminute-by-15-arcminute galaxy is also visible. ESO 270-17, also called the Fourcade-Figueroa Object, is a low-surface brightness object believed to be the remnants of a galaxy; it does not have a core and is very difficult to observe with an amateur telescope. It measures 7 arcminutes by 1 arcminute. It likely originated as a spiral galaxy and underwent a catastrophic gravitational interaction with Centaurus A around 500 million years ago, stopping its rotation and destroying its structure.
NGC 4650A is a polar-ring galaxy 136 million light-years from Earth (redshift 0.01). It has a central core made of older stars that resembles an elliptical galaxy, and an outer ring of young stars that orbits around the core. The plane of the outer ring is distorted, which suggests that NGC 4650A is the result of a galaxy collision about a billion years ago. This galaxy has also been cited in studies of dark matter, because the stars in the outer ring orbit too quickly for their collective mass. This suggests that the galaxy is surrounded by a dark matter halo, which provides the necessary mass.
One of the closest galaxy clusters to Earth is the Centaurus Cluster at 160 million light-years away, having redshift 0.0114. It has a cooler, denser central region of gas and a hotter, more diffuse outer region. The intracluster medium in the Centaurus Cluster has a high concentration of metals (elements heavier than helium) due to a large number of supernovae. This cluster also possesses a plume of gas whose origin is unknown.
History.
While Centaurus now has a high southern latitude, at the dawn of civilization it was an equatorial constellation. Precession has been slowly shifting it southward for millennia, and it is now close to its maximum southern declination. In a little over 7,000 years it will again be at maximum visibility for observers in the northern hemisphere, visible at certain times of the year up to quite a high northern latitude.
The figure of Centaurus can be traced back to a Babylonian constellation known as the Bison-man (MUL.GUD.ALIM). This being was depicted in two major forms: firstly, as a 4-legged bison with a human head, and secondly, as a being with a man's head and torso attached to the rear legs and tail of a bull or bison. It has been closely associated with the Sun god Utu-Shamash from very early times.
The Greeks depicted the constellation as a centaur and gave it its current name. It was mentioned by Eudoxus in the 4th century BC and Aratus in the 3rd century BC. In the 2nd century AD, Claudius Ptolemy catalogued 37 stars in Centaurus, including Alpha Centauri. Large as it is now, in earlier times it was even larger, as the constellation Lupus was treated as an asterism within Centaurus, portrayed in illustrations as an unspecified animal either in the centaur's grasp or impaled on its spear. The Southern Cross, which is now regarded as a separate constellation, was treated by the ancients as a mere asterism formed of the stars composing the centaur's legs. Additionally, what is now the minor constellation Circinus was treated as undefined stars under the centaur's front hooves.
According to the Roman poet Ovid ("Fasti" v.379), the constellation honors the centaur Chiron, who was tutor to many of the earlier Greek heroes including Heracles (Hercules), Theseus, and Jason, the leader of the Argonauts. It is not to be confused with the more warlike centaur represented by the zodiacal constellation Sagittarius. The legend associated with Chiron says that he was accidentally poisoned with an arrow shot by Hercules, and was subsequently placed in the heavens.
Equivalents.
In Chinese astronomy, the stars of Centaurus are found in three areas: the Azure Dragon of the East (東方青龍, "Dōng Fāng Qīng Lóng"), the Vermillion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"), and the Southern Asterisms (近南極星區, "Jìnnánjíxīngqū"). Not all of the stars of Centaurus can be seen from China, and the unseen stars were classified among the Southern Asterisms by Xu Guangqi, based on his study of western star charts. However, most of the brightest stars of Centaurus, including α Centauri, θ Centauri (or Menkent), ε Centauri and η Centauri, can be seen in the Chinese sky.
Some Polynesian peoples considered the stars of Centaurus to be a constellation as well. On Pukapuka, Centaurus had two names: "Na Mata-o-te-tokolua" and "Na Lua-mata-o-Wua-ma-Velo". In Tonga, the constellation was called by four names: "O-nga-tangata", "Tautanga-ufi", "Mamangi-Halahu", and "Mau-kuo-mau". Alpha and Beta Centauri were not named specifically by the people of Pukapuka or Tonga, but they were named by the people of Hawaii and the Tuamotus. In Hawaii, the name for Alpha Centauri was either "Melemele" or "Ka Maile-hope" and the name for Beta Centauri was either "Polapola" or "Ka Maile-mua". In the Tuamotu islands, Alpha was called "Na Kuhi" and Beta was called "Tere".
The Pointer (α Centauri and β Centauri) is one of the asterisms used by Bugis sailors for navigation, called "bintoéng balué", meaning "the widowed-before-marriage". It is also called "bintoéng sallatang" meaning "southern star".
Namesakes.
Two United States Navy ships, and , were named after Centaurus, the constellation.
|
6416
|
49176781
|
https://en.wikipedia.org/wiki?curid=6416
|
Impact crater
|
An impact crater is a depression in the surface of a solid astronomical body formed by the hypervelocity impact of a smaller object. In contrast to volcanic craters, which result from explosion or internal collapse, impact craters typically have raised rims and floors that are lower in elevation than the surrounding terrain. Impact craters are typically circular, though they can be elliptical in shape or even irregular due to events such as landslides. Impact craters range in size from microscopic craters seen on lunar rocks returned by the Apollo Program to simple bowl-shaped depressions and vast, complex, multi-ringed impact basins. Meteor Crater is a well-known example of a small impact crater on Earth.
Impact craters are the dominant geographic features on many solid Solar System objects including the Moon, Mercury, Callisto, Ganymede, and most small moons and asteroids. On other planets and moons that experience more active surface geological processes, such as Earth, Venus, Europa, Io, Titan, and Triton, visible impact craters are less common because they become eroded, buried, or transformed by tectonic and volcanic processes over time. Where such processes have destroyed most of the original crater topography, the terms impact structure or astrobleme are more commonly used. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth.
The cratering records of very old surfaces, such as Mercury, the Moon, and the southern highlands of Mars, record a period of intense early bombardment in the inner Solar System around 3.9 billion years ago. The rate of crater production on Earth has since been considerably lower, but it is appreciable nonetheless. Earth experiences, on average, from one to three impacts large enough to produce a crater every million years. This indicates that there should be far more relatively young craters on the planet than have been discovered so far. The cratering rate in the inner solar system fluctuates as a consequence of collisions in the asteroid belt that create a family of fragments that are often sent cascading into the inner solar system. Formed in a collision 80 million years ago, the Baptistina family of asteroids is thought to have caused a large spike in the impact rate. The rate of impact cratering in the outer Solar System could be different from the inner Solar System.
Although Earth's active surface processes quickly destroy the impact record, about 190 terrestrial impact craters have been identified. These range in diameter from a few tens of meters up to about , and they range in age from recent times (e.g. the Sikhote-Alin craters in Russia whose creation was witnessed in 1947) to more than two billion years, though most are less than 500 million years old because geological processes tend to obliterate older craters. They are also selectively found in the stable interior regions of continents. Few undersea craters have been discovered because of the difficulty of surveying the sea floor, the rapid rate of change of the ocean bottom, and the subduction of the ocean floor into Earth's interior by processes of plate tectonics.
History.
Daniel M. Barringer, a mining engineer, was convinced as early as 1903 that the crater he owned, Meteor Crater, was of cosmic origin. Most geologists at the time assumed it had formed as the result of a volcanic steam eruption.
In the 1920s, the American geologist Walter H. Bucher studied a number of sites now recognized as impact craters in the United States. He concluded they had been created by some great explosive event, but believed that this force was probably volcanic in origin. However, in 1936, the geologists John D. Boon and Claude C. Albritton Jr. revisited Bucher's studies and concluded that the craters that he studied were probably formed by impacts.
Grove Karl Gilbert suggested in 1893 that the Moon's craters were formed by large asteroid impacts. Ralph Baldwin in 1949 wrote that the Moon's craters were mostly of impact origin. Around 1960, Gene Shoemaker revived the idea. According to David H. Levy, Shoemaker "saw the craters on the Moon as logical impact sites that were formed not gradually, in eons, but explosively, in seconds." For his PhD degree at Princeton University (1960), under the guidance of Harry Hammond Hess, Shoemaker studied the impact dynamics of Meteor Crater. Shoemaker noted that Meteor Crater had the same form and structure as two explosion craters created from atomic bomb tests at the Nevada Test Site, notably Jangle U in 1951 and Teapot Ess in 1955. In 1960, Edward C. T. Chao and Shoemaker identified coesite (a form of silicon dioxide) at Meteor Crater, proving the crater was formed from an impact generating extremely high temperatures and pressures. They followed this discovery with the identification of coesite within suevite at Nördlinger Ries, proving its impact origin.
Armed with the knowledge of shock-metamorphic features, Carlyle S. Beals and colleagues at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada and Wolf von Engelhardt of the University of Tübingen in Germany began a methodical search for impact craters. By 1970, they had tentatively identified more than 50. Although their work was controversial, the American Apollo Moon landings, which were in progress at the time, provided supportive evidence by recognizing the rate of impact cratering on the Moon. Because the processes of erosion on the Moon are minimal, craters persist. Since the Earth could be expected to have roughly the same cratering rate as the Moon, it became clear that the Earth had suffered far more impacts than could be seen by counting evident craters.
Crater formation.
Impact cratering involves high velocity collisions between solid objects, typically much greater than the speed of sound in those objects. Such hyper-velocity impacts produce physical effects such as melting and vaporization that do not occur in familiar sub-sonic collisions. On Earth, ignoring the slowing effects of travel through the atmosphere, the lowest impact velocity with an object from space is equal to the gravitational escape velocity of about 11 km/s. The fastest impacts occur at about 72 km/s in the "worst case" scenario in which an object in a retrograde near-parabolic orbit hits Earth. The median impact velocity on Earth is about 20 km/s.
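The 11 km/s floor is simply Earth's escape velocity, v = sqrt(2GM/R). A minimal sketch using standard values for Earth's mass and radius (assumed constants, not taken from the text):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth's mass, kg (assumed standard value)
R = 6.371e6        # Earth's mean radius, m (assumed standard value)

v_esc = math.sqrt(2 * G * M / R)   # escape velocity; also the minimum
print(f"{v_esc / 1000:.1f} km/s")  # speed for an object falling from rest
# Prints ~11.2 km/s, matching the ~11 km/s minimum impact velocity above.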
However, the slowing effects of travel through the atmosphere rapidly decelerate any potential impactor, especially in the lowest 12 kilometres, where 90% of the Earth's atmospheric mass lies. Meteoroids of up to 7,000 kg lose all their cosmic velocity due to atmospheric drag at a certain altitude (the retardation point), and start to accelerate again due to Earth's gravity until the body reaches its terminal velocity of 0.09 to 0.16 km/s. The larger the meteoroid (a category that includes asteroids and comets), the more of its initial cosmic velocity it preserves. While an object of 9,000 kg maintains about 6% of its original velocity, one of 900,000 kg already preserves about 70%. Extremely large bodies (about 100,000 tonnes) are not slowed by the atmosphere at all, and impact with their initial cosmic velocity if no prior disintegration occurs.
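The quoted terminal velocities can be reproduced approximately from the drag-balance formula v_t = sqrt(2mg / (ρ·Cd·A)). The sketch below assumes a spherical stony body of density 3500 kg/m³, a drag coefficient of 1.0, and sea-level air density; all of these are illustrative assumptions rather than values from the text.

import math

g, rho_air, C_d, rho_rock = 9.81, 1.225, 1.0, 3500.0  # assumed values

def terminal_velocity(mass_kg):
    # Radius and cross-section of a sphere of the given mass.
    r = (3 * mass_kg / (4 * math.pi * rho_rock)) ** (1 / 3)
    area = math.pi * r ** 2
    # Speed at which atmospheric drag balances weight.
    return math.sqrt(2 * mass_kg * g / (rho_air * C_d * area))

for m in (10, 100, 1000, 7000):
    print(f"{m:>5} kg: {terminal_velocity(m):.0f} m/s")
# Gives roughly 80-240 m/s across this mass range, bracketing the
# 0.09-0.16 km/s figure quoted above for typical falls.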
Impacts at these high speeds produce shock waves in solid materials, and both impactor and the material impacted are rapidly compressed to high density. Following initial compression, the high-density, over-compressed region rapidly depressurizes, exploding violently, to set in train the sequence of events that produces the impact crater. Impact-crater formation is therefore more closely analogous to cratering by high explosives than by mechanical displacement. Indeed, the energy density of some material involved in the formation of impact craters is many times higher than that generated by high explosives. Since craters are caused by explosions, they are nearly always circular – only very low-angle impacts cause significantly elliptical craters.
This describes impacts on solid surfaces. Impacts on porous surfaces, such as that of Hyperion, may produce internal compression without ejecta, punching a hole in the surface without filling in nearby craters. This may explain the 'sponge-like' appearance of that moon.
It is convenient to divide the impact process conceptually into three distinct stages: (1) initial contact and compression, (2) excavation, (3) modification and collapse. In practice, there is overlap between the three processes with, for example, the excavation of the crater continuing in some regions while modification and collapse is already underway in others.
Contact and compression.
In the absence of atmosphere, the impact process begins when the impactor first touches the target surface. This contact accelerates the target and decelerates the impactor. Because the impactor is moving so rapidly, the rear of the object moves a significant distance during the short-but-finite time taken for the deceleration to propagate across the impactor. As a result, the impactor is compressed, its density rises, and the pressure within it increases dramatically. Peak pressures in large impacts exceed 1 TPa, reaching values more usually found deep in the interiors of planets, or generated artificially in nuclear explosions.
In physical terms, a shock wave originates from the point of contact. As this shock wave expands, it decelerates and compresses the impactor, and it accelerates and compresses the target. Stress levels within the shock wave far exceed the strength of solid materials; consequently, both the impactor and the target close to the impact site are irreversibly damaged. Many crystalline minerals can be transformed into higher-density phases by shock waves; for example, the common mineral quartz can be transformed into the higher-pressure forms coesite and stishovite. Many other shock-related changes take place within both impactor and target as the shock wave passes through, and some of these changes can be used as diagnostic tools to determine whether particular geological features were produced by impact cratering.
As the shock wave decays, the shocked region decompresses towards more usual pressures and densities. The damage produced by the shock wave raises the temperature of the material. In all but the smallest impacts this increase in temperature is sufficient to melt the impactor, and in larger impacts to vaporize most of it and to melt large volumes of the target. As well as being heated, the target near the impact is accelerated by the shock wave, and it continues moving away from the impact behind the decaying shock wave.
Excavation.
Contact, compression, decompression, and the passage of the shock wave all occur within a few tenths of a second for a large impact. The subsequent excavation of the crater occurs more slowly, and during this stage the flow of material is largely subsonic. During excavation, the crater grows as the accelerated target material moves away from the point of impact. The target's motion is initially downwards and outwards, but it becomes outwards and upwards. The flow initially produces an approximately hemispherical cavity that continues to grow, eventually producing a paraboloid (bowl-shaped) crater in which the centre has been pushed down, a significant volume of material has been ejected, and a topographically elevated crater rim has been pushed up. When this cavity has reached its maximum size, it is called the transient cavity.
The depth of the transient cavity is typically a quarter to a third of its diameter. Ejecta thrown out of the crater do not include material excavated from the full depth of the transient cavity; typically the depth of maximum excavation is only about a third of the total depth. As a result, about one third of the volume of the transient crater is formed by the ejection of material, and the remaining two thirds is formed by the displacement of material downwards, outwards and upwards, to form the elevated rim. For impacts into highly porous materials, a significant crater volume may also be formed by the permanent compaction of the pore space. Such compaction craters may be important on many asteroids, comets and small moons.
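Those proportions are easy to make concrete. Treating the transient cavity as a paraboloid of revolution (volume V = ½·π·r²·h) with a depth of one third of its diameter, a sketch for a hypothetical 1 km transient crater (all numbers purely illustrative):

import math

D = 1000.0            # hypothetical transient-cavity diameter, m
depth = D / 3         # depth at the upper end of the 1/4-to-1/3 range
V = 0.5 * math.pi * (D / 2) ** 2 * depth   # paraboloid volume

ejected = V / 3        # ~1/3 of the volume leaves the crater as ejecta
displaced = 2 * V / 3  # ~2/3 is displaced downward, outward and upward
print(f"total {V:.2e} m^3, ejected {ejected:.2e} m^3, displaced {displaced:.2e} m^3")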
In large impacts, as well as material displaced and ejected to form the crater, significant volumes of target material may be melted and vaporized together with the original impactor. Some of this impact melt rock may be ejected, but most of it remains within the transient crater, initially forming a layer of impact melt coating the interior of the transient cavity. In contrast, the hot dense vaporized material expands rapidly out of the growing cavity, carrying some solid and molten material within it as it does so. As this hot vapor cloud expands, it rises and cools much like the archetypal mushroom cloud generated by large nuclear explosions. In large impacts, the expanding vapor cloud may rise to many times the scale height of the atmosphere, effectively expanding into free space.
Most material ejected from the crater is deposited within a few crater radii, but a small fraction may travel large distances at high velocity, and in large impacts it may exceed escape velocity and leave the impacted planet or moon entirely. The majority of the fastest material is ejected from close to the center of impact, and the slowest material is ejected close to the rim at low velocities to form an overturned coherent flap of ejecta immediately outside the rim. As ejecta escapes from the growing crater, it forms an expanding curtain in the shape of an inverted cone. The trajectory of individual particles within the curtain is thought to be largely ballistic.
Small volumes of un-melted and relatively un-shocked material may be spalled at very high relative velocities from the surface of the target and from the rear of the impactor. Spalling provides a potential mechanism whereby material may be ejected into inter-planetary space largely undamaged, and whereby small volumes of the impactor may be preserved undamaged even in large impacts. Small volumes of high-speed material may also be generated early in the impact by jetting. This occurs when two surfaces converge rapidly and obliquely at a small angle, and high-temperature highly shocked material is expelled from the convergence zone with velocities that may be several times larger than the impact velocity.
Modification and collapse.
In most circumstances, the transient cavity is not stable and collapses under gravity. In small craters, less than about 4 km diameter on Earth, there is some limited collapse of the crater rim coupled with debris sliding down the crater walls and drainage of impact melts into the deeper cavity. The resultant structure is called a simple crater, and it remains bowl-shaped and superficially similar to the transient crater. In simple craters, the original excavation cavity is overlain by a lens of collapse breccia, ejecta and melt rock, and a portion of the central crater floor may sometimes be flat.
Above a certain threshold size, which varies with planetary gravity, the collapse and modification of the transient cavity is much more extensive, and the resulting structure is called a complex crater. The collapse of the transient cavity is driven by gravity, and involves both the uplift of the central region and the inward collapse of the rim. The central uplift is not the result of elastic rebound, which is a process in which a material with elastic strength attempts to return to its original geometry; rather the collapse is a process in which a material with little or no strength attempts to return to a state of gravitational equilibrium.
Complex craters have uplifted centers, and they have typically broad flat shallow crater floors, and terraced walls. At the largest sizes, one or more exterior or interior rings may appear, and the structure may be labeled an impact basin rather than an impact crater. Complex-crater morphology on rocky planets appears to follow a regular sequence with increasing size: small complex craters with a central topographic peak are called central peak craters, for example Tycho; intermediate-sized craters, in which the central peak is replaced by a ring of peaks, are called peak-ring craters, for example Schrödinger; and the largest craters contain multiple concentric topographic rings, and are called multi-ringed basins, for example Orientale. On icy (as opposed to rocky) bodies, other morphological forms appear that may have central pits rather than central peaks, and at the largest sizes may contain many concentric rings. Valhalla on Callisto is an example of this type.
Subsequent modification.
Long after an impact event, a crater may be further modified by erosion, mass wasting processes, viscous relaxation, or erased entirely. These effects are most prominent on geologically and meteorologically active bodies such as Earth, Titan, Triton, and Io. However, heavily modified craters may be found on more primordial bodies such as Callisto, where many ancient craters flatten into bright ghost craters, or palimpsests.
Identifying impact craters.
Non-explosive volcanic craters can usually be distinguished from impact craters by their irregular shape and the association of volcanic flows and other volcanic materials. Impact craters produce melted rocks as well, but usually in smaller volumes with different characteristics.
The distinctive mark of an impact crater is the presence of rock that has undergone shock-metamorphic effects, such as shatter cones, melted rocks, and crystal deformations. The problem is that these materials tend to be deeply buried, at least for simple craters. They tend to be revealed in the uplifted center of a complex crater, however.
Impacts produce distinctive shock-metamorphic effects that allow impact sites to be positively identified. Such effects include shatter cones, high-pressure mineral phases such as coesite and stishovite, and planar deformation features in quartz and other minerals.
Economic importance.
On Earth, impact craters have resulted in useful minerals. Some of the ores produced from impact-related effects on Earth include ores of iron, uranium, gold, copper, and nickel. It is estimated that the value of materials mined from impact structures is five billion dollars per year just for North America. The eventual usefulness of impact craters depends on several factors, especially the nature of the materials that were impacted and when the materials were affected. In some cases, the deposits were already in place and the impact brought them to the surface. These are called "progenetic economic deposits." Others were created during the actual impact. The great energy involved caused melting. Useful minerals formed as a result of this energy are classified as "syngenetic deposits." The third type, called "epigenetic deposits," is caused by the creation of a basin from the impact. Many of the minerals that our modern lives depend on are associated with impacts in the past. The Vredefort Dome in the center of the Witwatersrand Basin is the largest goldfield in the world and has supplied about 40% of all the gold ever mined from within an impact structure (though the gold did not come from the bolide). The asteroid that struck the region was wide. The Sudbury Basin was caused by an impacting body over in diameter. This basin is famous for its deposits of nickel, copper, and platinum group elements. An impact was involved in making the Carswell structure in Saskatchewan, Canada; it contains uranium deposits.
Hydrocarbons are common around impact structures. Fifty percent of impact structures in North America in hydrocarbon-bearing sedimentary basins contain oil/gas fields.
Lists of craters.
Impact craters on Earth.
On Earth, the recognition of impact craters is a branch of geology, and is related to planetary geology in the study of other worlds. Out of many proposed craters, relatively few are confirmed.
See the Earth Impact Database, a website concerned with the roughly 190 scientifically confirmed impact craters on Earth.
Largest named craters in the Solar System.
There are approximately twelve impact craters or basins larger than 300 km on the Moon, five on Mercury, and four on Mars. Large basins, some unnamed but mostly smaller than 300 km, can also be found on Saturn's moons Dione, Rhea and Iapetus.
|
6420
|
42021989
|
https://en.wikipedia.org/wiki?curid=6420
|
Corona Borealis
|
Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by him in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den or a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern.
The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946, and is predicted to do the same in 2025. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five stars in the constellation host Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster.
Characteristics.
Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the IAU designated constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight segments ("illustrated in infobox"). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere.
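The 0.433% figure follows directly from the area of the whole celestial sphere, 4π steradians, or about 41,253 square degrees. A quick check:

import math

full_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 deg^2
print(f"{179 / full_sky_sq_deg:.3%}")                  # prints 0.434%
# agreeing, to within rounding, with the 0.433% quoted above.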
Features.
Stars.
The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas "Uranometria". Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5.
Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun and 57 times as luminous, and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is a little smaller than the Sun (0.9 times its diameter). Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space.
Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around , 2.6 solar radii (), and . The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around , , and between 4 and . Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk.
Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around and has swollen to . It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core.
Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system divisible by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries.
Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a nuclear chain reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis started dimming in March 2023 and it is known that before it goes nova it dims for about a year; for this reason it was initially expected to go nova at any time between March and September, 2024. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 μm) ejected periodically.
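The jump from magnitude 10 to magnitude 2 is larger than it sounds, because the magnitude scale is logarithmic: every 5 magnitudes corresponds to a factor of 100 in brightness. A one-line check:

delta_m = 10 - 2                  # change in apparent magnitude
ratio = 100 ** (delta_m / 5)      # flux ratio implied by the magnitude scale
print(f"{ratio:.0f}x brighter")   # ~1585x increase during an outburst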
There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days.
TY Coronae Borealis is a pulsating white dwarf (of ZZ Ceti type), which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star.
Extrasolar planetary systems.
Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III that has swollen to and . Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lies a dust debris disk, and one planet with a period of 3.4 years. This planet's mass is estimated at . The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet with a mass of that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of . Discovered by the Doppler method in 2010, it takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star located approximately light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days.
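For the transit method mentioned above, the measurable quantity is the fractional dimming of the star, approximately (R_planet / R_star)². A minimal sketch for a Jupiter-sized planet around a Sun-sized star, as the text describes XO-1b and XO-1 (the radii used here are standard Jupiter and Sun values, assumed for illustration rather than measured for this system):

R_jupiter_km = 71_492     # assumed planet radius (Jupiter's)
R_sun_km = 696_000        # assumed stellar radius (the Sun's)

depth = (R_jupiter_km / R_sun_km) ** 2   # fraction of starlight blocked
print(f"transit depth ~ {depth:.1%}")    # ~1.1% dip in brightness, which
                                         # is detectable from the ground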
The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and Solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001.
Deep-sky objects.
Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and elliptical galaxy respectively, close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, makes up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters.
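The parenthetical recession velocity is simply the low-redshift approximation v ≈ c·z. A one-line check against the quoted figures:

c_km_s = 299_792.458                 # speed of light, km/s
z = 0.0909                           # redshift of Abell 2142 from the text
print(f"{c_km_s * z:,.0f} km/s")     # ~27,251 km/s, matching the ~27,250
                                     # km/s quoted (valid only for small z)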
Mythology.
In Greek mythology, Corona Borealis was linked to the legend of Theseus and the minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternative version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the minotaur that the Cretans have demanded tribute from Athens to feed. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. A work attributed to Hyginus linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having been previously hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Its proximity to the constellations Hercules (which, according to some reports, was once attributed to Theseus, among others) and Lyra (Theseus' lyre in one account) could indicate that the three constellations were invented as a group. Corona Borealis was one of the 48 constellations mentioned in the "Almagest" of the classical astronomer Ptolemy.
In Mesopotamia, Corona Borealis was associated with the goddess Nanaya.
In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as "Darželis", the "flower garden".
The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up", a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as "the dish/bowl of the poor people".
The Skidi, a Native American people, saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the "Heavenly Sisters", who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, her heartbroken husband and son following later. The Mi'kmaq of eastern Canada saw Corona Borealis as "Mskegwǒm", the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris).
Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it "Na Kaua-ki-tokerau" and probably "Te Hetu". The constellation was likely called "Kaua-mea" in Hawaii, "Rangawhenua" in New Zealand, and "Te Wale-o-Awitu" in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called "Ao-o-Uvea" or "Kau-kupenga".
In Australian Aboriginal astronomy, the constellation is called "womera" ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as "mullion wollai", "eagle's nest", with Altair and Vega—each called "mullion"—the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence.
Later references.
Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas "Mercurii Philosophicii Firmamentum Firminianum Descriptionem" by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists in the short story. Finnish band Cadacross released an album titled "Corona Borealis" in 2002.
|
6421
|
42021989
|
https://en.wikipedia.org/wiki?curid=6421
|
Cygnus (constellation)
|
Cygnus is a northern constellation on the plane of the Milky Way, deriving its name from the Latinized Greek word for swan. Cygnus is one of the most recognizable constellations of the northern summer and autumn, and it features a prominent asterism known as the Northern Cross (in contrast to the Southern Cross). Cygnus was among the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations.
Cygnus contains Deneb (ذنب, translit. "ḏanab", meaning "tail"), one of the brightest stars in the night sky and the most distant first-magnitude star, as its "tail star" and one corner of the Summer Triangle, with the constellation forming an east-pointing altitude of the triangle. It also has some notable X-ray sources and the giant stellar association of Cygnus OB2. One of the stars of this association, NML Cygni, is one of the largest stars currently known. The constellation is also home to Cygnus X-1, a distant X-ray binary containing a supergiant and an unseen massive companion that was the first object widely held to be a black hole.
Many star systems in Cygnus have known planets as a result of the Kepler Mission observing one patch of the sky, an area around Cygnus.
The eastern part of the constellation contains part of the Hercules–Corona Borealis Great Wall, a giant galaxy filament in the deep sky that is the largest known structure in the observable universe and spans much of the northern sky.
History and mythology.
In Eastern and World astronomy.
In Polynesia, Cygnus was often recognized as a separate constellation. In Tonga it was called "Tuula-lupe", and in the Tuamotus it was called "Fanui-tai". In New Zealand it was called "Mara-tea", in the Society Islands it was called "Pirae-tea" or "Taurua-i-te-haapa-raa-manu", and in the Tuamotus it was called "Fanui-raro". Beta Cygni was named in New Zealand; it was likely called "Whetu-kaupo". Gamma Cygni was called "Fanui-runga" in the Tuamotus.
While represented as a swan in the West, the constellation is known as ad-Dajājah in Arabic, meaning "the hen". Cygnus's brightest star, known in the Western world as Deneb, takes its name from the Arabic "dhaneb", meaning "tail", from the phrase "Dhanab ad-Dajājah", the tail of the hen.
In Western astronomy.
In Greek mythology, Cygnus has been identified with several different legendary swans. Zeus disguised himself as a swan to seduce Leda, Spartan king Tyndareus's wife, who gave birth to the Gemini, Helen of Troy, and Clytemnestra; Orpheus was transformed into a swan after his murder, and was said to have been placed in the sky next to his lyre (Lyra); and a man named Cygnus (Greek for "swan") was transformed into his namesake.
Later Romans also associated this constellation with the tragic story of Phaethon, the son of Helios the sun god, who demanded to ride his father's sun chariot for a day. Phaethon, however, was unable to control the reins, forcing Zeus to destroy the chariot (and Phaethon) with a thunderbolt, causing it to plummet to the earth into the river Eridanus. According to the myth, Phaethon's close friend or lover, Cygnus of Liguria, grieved bitterly and spent many days diving into the river to collect Phaethon's bones to give him a proper burial. The gods were so touched by Cygnus's devotion that they turned him into a swan and placed him among the stars.
In Ovid's "Metamorphoses", there are three people named Cygnus, all of whom are transformed into swans. Alongside Cygnus, noted above, he mentions a boy from Aetolia who throws himself off a cliff when his companion Phyllius refuses to give him a tamed bull that he demands, but he is transformed into a swan and flies away. He also mentions a son of Poseidon, an invulnerable warrior in the Trojan War who is eventually killed by Achilles, but Poseidon saves him by transforming him into a swan.
Together with other avian constellations near the summer solstice, Vultur cadens and Aquila, Cygnus may be a significant part of the origin of the myth of the Stymphalian Birds, one of The Twelve Labours of Hercules.
Characteristics.
A very large constellation, Cygnus is bordered by Cepheus to the north and east, Draco to the north and west, Lyra to the west, Vulpecula to the south, Pegasus to the southeast and Lacerta to the east. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Cyg". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 28 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 27.73° and 61.36°. Covering 804 square degrees and around 1.9% of the night sky, Cygnus ranks 16th of the 88 constellations in size.
Cygnus culminates at midnight on 29 June, and is most visible in the evening from the early summer to mid-autumn in the Northern Hemisphere.
Normally, Cygnus is depicted with Delta and Epsilon Cygni as its wings. Deneb, the brightest star in the constellation, lies at its tail, and Albireo at the tip of its beak.
There are several asterisms in Cygnus. In the 17th-century German celestial cartographer Johann Bayer's star atlas the "Uranometria", Alpha, Beta and Gamma Cygni form the pole of a cross, while Delta and Epsilon form the cross beam. The nova P Cygni was then considered to be the body of Christ.
Features.
There is an abundance of deep-sky objects, with many open clusters, nebulae of various types and supernova remnants found in Cygnus due to its position on the Milky Way.
Its molecular clouds form the Cygnus Rift, a dark nebula comprising one end of the Great Rift along the Milky Way's galactic plane. The rift begins around the Northern Coalsack and partially obscures the larger Cygnus molecular cloud complex behind it, of which the North America Nebula is part.
Stars.
Bayer catalogued many stars in the constellation, giving them the Bayer designations Alpha to Omega and then using lowercase Roman letters down to g. John Flamsteed added the Roman letters h, i, k, l and m (these stars were considered "informes" by Bayer, as they lay outside the asterism of Cygnus), but they were later dropped by Francis Baily.
There are several bright stars in Cygnus. α Cygni, called Deneb, is the brightest star in Cygnus. It is a white supergiant star of spectral type A2Iae that varies between magnitudes 1.21 and 1.29, one of the largest and most luminous A-class stars known. It is located about 2600 light-years away. Its traditional name means "tail" and refers to its position in the constellation. Albireo, designated β Cygni, is a binary star celebrated among amateur astronomers for its contrasting hues. The primary is an orange-hued giant star of magnitude 3.1 and the secondary is a blue-green hued star of magnitude 5.1. The system is 430 light-years away and is visible in large binoculars and all amateur telescopes. γ Cygni, traditionally named Sadr, is a yellow-tinged supergiant star of magnitude 2.2, 1800 light-years away. Its traditional name means "breast" and refers to its position in the constellation. δ Cygni (the proper name is Fawaris) is another bright binary star in Cygnus, 166 light-years away, with an orbital period of about 800 years. The primary is a blue-white hued giant star of magnitude 2.9, and the secondary is a star of magnitude 6.6. The two components are visible in a medium-sized amateur telescope. The fifth star in Cygnus above magnitude 3 is Aljanah, designated ε Cygni. It is an orange-hued giant star of magnitude 2.5, 72 light-years from Earth.
There are several other dimmer double and binary stars in Cygnus. μ Cygni is a binary star with an optical tertiary component. The binary system has a period of 790 years and is 73 light-years from Earth. The primary and secondary, both white stars, are of magnitude 4.8 and 6.2, respectively. The unrelated tertiary component is of magnitude 6.9. Though the tertiary component is visible in binoculars, the primary and secondary currently require a medium-sized amateur telescope to split, as they will through the year 2020. The two stars will be closest between 2043 and 2050, when they will require a telescope of larger aperture to split. The stars 30 and 31 Cygni form a contrasting double star similar to the brighter Albireo. The two are visible in binoculars. The primary, 31 Cygni, is an orange-hued star of magnitude 3.8, 1400 light-years from Earth. The secondary, 30 Cygni, appears blue-green. It is of spectral type A5IIIn and magnitude 4.83, and is around 610 light-years from Earth. 31 Cygni itself is a binary star; the tertiary component is a blue star of magnitude 7.0. ψ Cygni is a binary star visible in small amateur telescopes, with two white components. The primary is of magnitude 5.0 and the secondary is of magnitude 7.5. 61 Cygni is a binary star visible in large binoculars or a small amateur telescope. It is 11.4 light-years from Earth and has a period of 750 years. Both components are orange-hued dwarf (main sequence) stars; the primary is of magnitude 5.2 and the secondary is of magnitude 6.1. 61 Cygni is significant because in 1838 Friedrich Wilhelm Bessel determined its parallax, making it the first star to have a reliably measured distance.
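Bessel's measurement matters because annual parallax converts directly to distance: a star with parallax p arcseconds lies 1/p parsecs away. A minimal sketch of the conversion, assuming a modern parallax of roughly 0.286 arcseconds for 61 Cygni (an illustrative value; Bessel's own 1838 figure was about 0.314 arcseconds):

```python
LY_PER_PARSEC = 3.26156

parallax_arcsec = 0.286            # assumed modern value for 61 Cygni
distance_pc = 1 / parallax_arcsec  # d [pc] = 1 / p [arcsec]
distance_ly = distance_pc * LY_PER_PARSEC
print(f"{distance_ly:.1f} light-years")  # -> 11.4, matching the distance quoted above
```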
Located near η Cygni is the X-ray source Cygnus X-1, which is now thought to be caused by a black hole accreting matter in a binary star system. This was the first X-ray source widely believed to be a black hole. It is located approximately 2.2 kiloparsecs from the Sun. The system also contains a supergiant variable star, HDE 226868.
Cygnus also contains several other noteworthy X-ray sources. Cygnus X-3 is a microquasar containing a Wolf–Rayet star in orbit around a very compact object, with a period of only 4.8 hours. The system is one of the most intrinsically luminous X-ray sources observed. The system undergoes periodic outbursts of unknown nature, and during one such outburst, the system was found to be emitting muons, likely caused by neutrinos. While the compact object is thought to be a neutron star or possibly a black hole, it is possible that the object is instead a more exotic stellar remnant, possibly the first discovered quark star, hypothesized due to its production of cosmic rays that cannot be explained if the object is a normal neutron star. The system also emits cosmic rays and gamma rays, and has helped shed light on the formation of such rays. Cygnus X-2 is another X-ray binary, containing an A-type giant in orbit around a neutron star with a 9.8-day period. The system is interesting due to the rather small mass of the companion star, as most millisecond pulsars have much more massive companions. Another black hole in Cygnus is V404 Cygni, which consists of a K-type star orbiting a black hole of around 12 solar masses. The black hole, like that of Cygnus X-3, has been hypothesized to be a quark star. 4U 2129+47 is another X-ray binary containing a neutron star which undergoes outbursts, as is EXO 2030+375.
Cygnus is also home to several variable stars. SS Cygni is a dwarf nova which undergoes outbursts every 7–8 weeks. The system's total magnitude varies from 12th magnitude at its dimmest to 8th magnitude at its brightest. The two objects in the system are incredibly close together, with an orbital period of less than 0.28 days. χ Cygni is a red giant and the second-brightest Mira variable star at its maximum. It ranges between magnitudes 3.3 and 14.2, and spectral types S6,2e to S10,4e (MSe), over a period of 408 days; it has a diameter of 300 solar diameters and is 350 light-years from Earth. P Cygni is a luminous blue variable that brightened suddenly to 3rd magnitude in 1600 AD. Since 1715, the star has been of 5th magnitude, despite being more than 5000 light-years from Earth. The star's spectrum is unusual in that it contains very strong emission lines resulting from surrounding nebulosity. W Cygni is a semi-regular variable red giant star, 618 light-years from Earth. It has a maximum magnitude of 5.10 and a minimum magnitude of 6.83, with a period of 131 days; it is a red giant ranging between spectral types M4e and M6e(Tc:)III. NML Cygni is a red hypergiant semi-regular variable star located about 5,300 light-years from Earth. It is one of the largest stars currently known in the galaxy, with a radius exceeding 1,000 solar radii; its magnitude is around 16.6 and its period about 940 days.
The star KIC 8462852 (Tabby's Star) has received widespread press coverage because of unusual light fluctuations.
Exoplanets.
Cygnus is one of the constellations that the Kepler satellite surveyed in its search for exoplanets, and as a result, there are about a hundred stars in Cygnus with known planets, the most of any constellation. One of the most notable systems is the Kepler-11 system, containing six transiting planets, all within a plane of approximately one degree. It was the first exoplanetary system discovered with six transiting planets. With a spectral type of G6V, the star is somewhat cooler than the Sun. All the planets are more massive than Earth, and all have low densities; all but one are closer to Kepler-11 than Mercury is to the Sun. The naked-eye star 16 Cygni, a triple star approximately 70 light-years from Earth composed of two Sun-like stars and a red dwarf, contains a planet orbiting one of the Sun-like stars, found due to variations in the star's radial velocity. Gliese 777, another naked-eye multiple star system containing a yellow star and a red dwarf, also contains a planet. The planet is somewhat similar to Jupiter, but with slightly more mass and a more eccentric orbit. The Kepler-22 system is also notable for containing the most Earth-like exoplanet known at the time of its discovery in 2011.
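Kepler found these planets by transit photometry: when a planet crosses its star's disk, the star dims by roughly the squared ratio of the two radii. A minimal sketch with round reference radii (illustrative values, not Kepler catalogue data):

```python
# Fractional dimming during a central transit: depth ≈ (Rp / Rs)^2.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    return (r_planet_km / r_star_km) ** 2

print(f"{transit_depth(R_EARTH_KM, R_SUN_KM):.6f}")    # ≈ 0.000084 (84 parts per million)
print(f"{transit_depth(R_JUPITER_KM, R_SUN_KM):.4f}")  # ≈ 0.0101 (about 1%)
```

The tiny Earth-size signal is why a dedicated space photometer was needed to find such planets.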
Star clusters.
The rich background of stars in Cygnus can make open clusters difficult to make out.
M39 (NGC 7092) is an open cluster 950 light-years from Earth that is visible to the unaided eye under dark skies. It is loose, with about 30 stars arranged over a wide area in a roughly triangular conformation. The brightest stars of M39 are of the 7th magnitude. Another open cluster in Cygnus is NGC 6910, also called the Rocking Horse Cluster, possessing 16 stars with a diameter of 5 arcminutes visible in a small amateur instrument; it is of magnitude 7.4. The brightest of these are two gold-hued stars, which represent the bottom of the toy it is named for. A larger amateur instrument reveals 8 more stars, nebulosity to the east and west of the cluster, and a diameter of 9 arcminutes. The nebulosity in this region is part of the Gamma Cygni Nebula. The cluster's other stars, approximately 3700 light-years from Earth, are mostly blue-white and very hot.
Other open clusters in Cygnus include Dolidze 9, Collinder 421, Dolidze 11, and Berkeley 90. Dolidze 9, 2800 light-years from Earth and relatively young at 20 million years old, is a faint open cluster with up to 22 stars visible in small and medium-sized amateur telescopes. Nebulosity is visible to the north and east of the cluster, which is 7 arcminutes in diameter. The brightest star appears in the eastern part of the cluster and is of the 7th magnitude; another bright star has a yellow hue. Dolidze 11 is an open cluster 400 million years old, the farthest away of the three at 3700 light-years. More than 10 stars are visible in an amateur instrument in this cluster, of similar size to Dolidze 9 at 7 arcminutes in diameter, whose brightest star is of magnitude 7.5. It, too, has nebulosity in the east. Collinder 421 is a particularly old open cluster at an age of approximately 1 billion years; it is of magnitude 10.1. 3100 light-years from Earth, more than 30 stars are visible in a diameter of 8 arcminutes. The prominent star in the north of the cluster has a golden color, whereas the stars in the south of the cluster appear orange. Collinder 421 appears to be embedded in nebulosity, which extends past the cluster's borders to its west. Berkeley 90 is a smaller open cluster, with a diameter of 5 arcminutes. More than 16 members appear in an amateur telescope.
Nebulae.
NGC 6826, the Blinking Planetary Nebula, is a planetary nebula with a magnitude of 8.5, 3200 light-years from Earth. It appears to "blink" in the eyepiece of a telescope because its central star is unusually bright (10th magnitude). When an observer focuses on the star, the nebula appears to fade away. Less than one degree from the Blinking Planetary is the double star 16 Cygni.
The North America Nebula (NGC 7000) is one of the most well-known nebulae in Cygnus, because it is visible to the unaided eye under dark skies, as a bright patch in the Milky Way. However, its characteristic shape is only visible in long-exposure photographs – it is difficult to observe in telescopes because of its low surface brightness. It has low surface brightness because it is so large; at its widest, the North America Nebula is 2 degrees across. Illuminated by a hot embedded star of magnitude 6, NGC 7000 is 1500 light-years from Earth.
To the south of Epsilon Cygni is the Veil Nebula (NGC 6960, 6979, 6992, and 6995), a 5,000-year-old supernova remnant covering approximately 3 degrees of the sky; it is over 50 light-years long. Because of its appearance, it is also called the Cygnus Loop. The Loop is only visible in long-exposure astrophotographs. However, the brightest portion, NGC 6992, is faintly visible in binoculars, and a dimmer portion, NGC 6960, is visible in wide-angle telescopes.
The DR 6 cluster is also nicknamed the "Galactic Ghoul" because of the nebula's resemblance to a human face.
The Gamma Cygni Nebula (IC 1318) includes both bright and dark nebulae in an area of over 4 degrees. DWB 87 is another of the many bright emission nebulae in Cygnus, 7.8 by 4.3 arcminutes. It is in the Gamma Cygni area. Two other emission nebulae include Sharpless 2-112 and Sharpless 2-115. When viewed in an amateur telescope, Sharpless 2-112 appears to be in a teardrop shape. More of the nebula's eastern portion is visible with an O III (doubly ionized oxygen) filter. There is an orange star of magnitude 10 nearby and a star of magnitude 9 near the nebula's northwest edge. Further to the northwest, there is a dark rift and another bright patch. The whole nebula measures 15 arcminutes in diameter. Sharpless 2-115 is another emission nebula with a complex pattern of light and dark patches. Two pairs of stars appear in the nebula; it is larger near the southwestern pair. The open cluster Berkeley 90 is embedded in this large nebula, which measures 30 by 20 arcminutes.
Also of note is the Crescent Nebula (NGC 6888), located between Gamma and Eta Cygni, which was formed by the Wolf–Rayet star HD 192163.
In recent years, amateur astronomers have made some notable Cygnus discoveries. The "Soap bubble nebula" (PN G75.5+1.7), near the Crescent nebula, was discovered on a digital image by Dave Jurasevich in 2007. In 2011, Austrian amateur Matthias Kronberger discovered a planetary nebula (Kronberger 61, now nicknamed "The Soccer Ball") on old survey photos, confirmed recently in images by the Gemini Observatory; both of these are likely too faint to be detected by eye in a small amateur scope.
A much more obscure and relatively small object, yet one readily seen in amateur telescopes under dark skies in good conditions, is the newly discovered nebula (likely a reflection nebula) associated with the star 4 Cygni (HD 183056): an approximately fan-shaped glowing region of several arcminutes' diameter, to the south and west of the fifth-magnitude star. It was first discovered visually near San Jose, California, and publicly reported by amateur astronomer Stephen Waldee in 2007, and was confirmed photographically by Al Howard in 2010. California amateur astronomer Dana Patchick also says he detected it on the Palomar Observatory survey photos in 2005, but had not published it for others to confirm and analyze at the time of Waldee's first official notices and later 2010 paper.
Cygnus X is the largest star-forming region in the solar neighborhood and includes not only some of the brightest and most massive stars known (such as Cygnus OB2-12), but also Cygnus OB2, a massive stellar association classified by some authors as a young globular cluster.
Deep space objects.
Cygnus A is the first radio galaxy discovered; at a distance of 730 million light-years from Earth, it is the closest powerful radio galaxy. In the visible spectrum, it appears as an elliptical galaxy in a small cluster. It is classified as an active galaxy because the supermassive black hole at its nucleus is accreting matter, which produces two jets of matter from the poles. The jets' interaction with the interstellar medium creates radio lobes, one source of radio emissions.
Other features.
Cygnus is also the apparent source of the hypothesized WIMP wind, owing to the direction of the Solar System's motion through the galactic halo.
The local Orion-Cygnus Arm and the distant Cygnus Arm are two minor galactic arms named after Cygnus for lying in its background.
|
6423
|
49934550
|
https://en.wikipedia.org/wiki?curid=6423
|
Calorie
|
The calorie is a unit of energy that originated from the caloric theory of heat. The large calorie, food calorie, dietary calorie, kilocalorie, or kilogram calorie is defined as the amount of heat needed to raise the temperature of one liter of water by one degree Celsius (or one kelvin). The small calorie or gram calorie is defined as the amount of heat needed to cause the same increase in one milliliter of water. Thus, 1 large calorie is equal to 1,000 small calories.
In nutrition and food science, the term "calorie" and the symbol "cal" may refer to the large unit or to the small unit in different regions of the world. It is generally used in publications and on package labels to express the energy value of foods per serving or per unit of weight, recommended dietary caloric intake, metabolic rates, etc. Some authors recommend the spelling "Calorie" and the symbol "Cal" (both with a capital C) if the large calorie is meant, to avoid confusion; however, this convention is often ignored.
In physics and chemistry, the word "calorie" and its symbol usually refer to the small unit, the large one being called "kilocalorie" (kcal). However, the kcal is not officially part of the International System of Units (SI), and is regarded as obsolete, having been replaced in many uses by the SI derived unit of energy, the joule (J), or the kilojoule (kJ) for 1000 joules.
The precise equivalence between calories and joules has varied over the years, but in thermochemistry and nutrition it is now generally assumed that one (small) calorie (thermochemical calorie) is equal to exactly 4.184 J, and therefore one kilocalorie (one large calorie) is 4184 J or 4.184 kJ.
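Expressed in code, the equivalences above reduce to a single constant; a minimal sketch in Python:

```python
# Thermochemical definitions: 1 small calorie = 4.184 J exactly,
# and 1 large calorie (kcal) = 1,000 small calories = 4,184 J.
J_PER_SMALL_CAL = 4.184

def small_cal_to_joules(cal: float) -> float:
    return cal * J_PER_SMALL_CAL

def kcal_to_kilojoules(kcal: float) -> float:
    return kcal * J_PER_SMALL_CAL  # same numeric factor at the kilo scale

print(small_cal_to_joules(1000))  # 4184.0 J, i.e. one large calorie
print(kcal_to_kilojoules(250))    # 1046.0 kJ, e.g. a 250 kcal food serving
```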
History.
The term "calorie" comes . It was first introduced by Nicolas Clément, as a unit of heat energy, in lectures on experimental calorimetry during the years 1819–1824. This was the "large" calorie. The term (written with lowercase "c") entered French and English dictionaries between 1841 and 1867.
The same term was used for the "small" unit by Pierre Antoine Favre (chemist) and Johann T. Silbermann (physicist) in 1852.
In 1879, Marcellin Berthelot distinguished between gram-calorie and kilogram-calorie, and proposed using "Calorie", with capital "C", for the large unit. This usage was adopted by Wilbur Olin Atwater, a professor at Wesleyan University, in 1887, in an influential article on the energy content of food.
The smaller unit was used by U.S. physician Joseph Howard Raymond, in his classic 1894 textbook "A Manual of Human Physiology". He proposed calling the "large" unit "kilocalorie", but the term did not catch on until some years later.
The small calorie (cal) was recognized as a unit of the CGS system in 1896, alongside the already-existing CGS unit of energy, the erg (first suggested by Clausius in 1864, under the name "ergon", and officially adopted in 1882).
In 1928, there were already serious complaints about the possible confusion arising from the two main definitions of the calorie and whether the notion of using the capital letter to distinguish them was sound.
The joule was the officially adopted SI unit of energy at the ninth General Conference on Weights and Measures in 1948. The calorie was mentioned in the 7th edition of the SI brochure as an example of a non-SI unit.
The alternate spelling "calory" is a less-common, non-standard variant.
Definitions.
The "small" calorie is broadly defined as the amount of energy needed to increase the temperature of 1 gram of water by 1 °C (or 1 K, which is the same increment, a gradation of one percent of the interval between the melting point and the boiling point of water). The actual amount of energy required to accomplish this temperature increase depends on the atmospheric pressure and the starting temperature; different choices of these parameters have resulted in several different precise definitions of the unit.
The two definitions most common in older literature appear to be the "15 °C calorie" and the "thermochemical calorie". Until 1948, the latter was defined as 4.1833 international joules; the current standard of 4.184 J was chosen to have the new thermochemical calorie represent the same quantity of energy as before.
Usage.
Nutrition.
In the United States and Canada, in a nutritional context, the "large" unit is used almost exclusively. It is generally written "calorie" with lowercase "c" and symbol "cal", even in government publications. The SI unit kilojoule (kJ) may be used instead in legal or scientific contexts. Most American nutritionists prefer the unit kilocalorie to the unit kilojoule, whereas most physiologists prefer to use kilojoules. In the majority of other countries, nutritionists prefer the kilojoule to the kilocalorie.
In the European Union, on nutrition facts labels, energy is expressed in both kilojoules and kilocalories, abbreviated as "kJ" and "kcal" respectively.
In China, only kilojoules are given.
Food energy.
The unit is most commonly used to express food energy, namely the specific energy (energy per mass) of metabolizing different types of food. For example, fat (triglyceride lipids) contains 9 kilocalories per gram (kcal/g), while carbohydrates (sugar and starch) and protein contain approximately 4 kcal/g. Alcohol in food contains 7 kcal/g. The "large" unit is also used to express recommended nutritional intake or consumption, as in "calories per day".
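These per-gram figures make food-label arithmetic straightforward. A minimal sketch using a hypothetical snack (the gram amounts are invented for illustration):

```python
# Approximate energy density of macronutrients, in kcal per gram.
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4, "alcohol": 7}

def food_energy_kcal(grams: dict) -> float:
    return float(sum(KCAL_PER_GRAM[macro] * g for macro, g in grams.items()))

# Hypothetical snack: 10 g fat, 30 g carbohydrate, 5 g protein.
print(food_energy_kcal({"fat": 10, "carbohydrate": 30, "protein": 5}))  # -> 230.0 kcal
```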
Dieting is the practice of eating food in a regulated way to decrease, maintain, or increase body weight, or to prevent and treat diseases such as diabetes and obesity. As weight loss depends on reducing caloric intake, different kinds of calorie-reduced diets have been shown to be generally effective.
Chemistry and physics.
In other scientific contexts, the term "calorie" and the symbol "cal" almost always refers to the small unit; the "large" unit being generally called "kilocalorie" with symbol "kcal". It is mostly used to express the amount of energy released in a chemical reaction or phase change, typically per mole of substance, as in kilocalories per mole. It is also occasionally used to specify other energy quantities that relate to reaction energy, such as enthalpy of formation and the size of activation barriers. However, it is increasingly being superseded by the SI unit, the joule (J); and metric multiples thereof, such as the kilojoule (kJ).
The lingering use in chemistry is largely because the energy released by a reaction in aqueous solution, expressed in kilocalories per mole of reagent, is numerically close to the concentration of the reagent in moles per liter multiplied by the change in the temperature of the solution in kelvins or degrees Celsius. However, this estimate assumes that the volumetric heat capacity of the solution is 1 kcal/(L⋅K), which is not exact even for pure water.
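Stated as an estimate: for a dilute aqueous reaction releasing Q kcal per mole at concentration c mol/L, the temperature rise is roughly Q × c kelvins, precisely because water's volumetric heat capacity is close to 1 kcal/(L⋅K). A hedged sketch (the 13.7 kcal/mol figure, a standard textbook value for strong acid–strong base neutralization, is assumed here for illustration):

```python
def delta_t_kelvin(q_kcal_per_mol: float, conc_mol_per_litre: float) -> float:
    # Approximation discussed above: the solution's heat capacity is
    # taken as 1 kcal/(L*K), which is only roughly true even for water.
    return q_kcal_per_mol * conc_mol_per_litre

print(delta_t_kelvin(13.7, 0.5))  # -> 6.85, i.e. about a 7 K temperature rise
```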
|
6424
|
42021989
|
https://en.wikipedia.org/wiki?curid=6424
|
Corona Australis
|
Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, ostrich nest, a tent, or even a hut belonging to a rock hyrax.
Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are the two brightest stars with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System—a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly.
Name.
The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922.
In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations.
The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively.
Characteristics.
Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around the 30th of June and ranks 80th in area. Only visible at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States.
Features.
While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through to Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming.
Stars.
The only star in the constellation to have received a name is Alfecca Meridiana or Alpha CrA. The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, "Alfecca" means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; at present, it has a luminosity 31 times that of the Sun, and a radius and mass of 2.3 times the Sun's. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, making it one of the highest-luminosity K0-type stars visible to the naked eye. 100 million years old, it has a radius of 43 solar radii and a mass of between 4.5 and 5 solar masses. Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye.
Some of the more prominent double stars include Gamma Coronae Australis—a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems are known as contact binaries as the component stars are so close together they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta1 and Eta2 CrA, which form an optical double. Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white. Kappa Coronae Australis is an easily resolved optical double—the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa2 is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa1 is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude of 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees.
Zeta Coronae Australis is a rapidly rotating main sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62. Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted.
Corona Australis Molecular Cloud.
The Corona Australis Molecular Cloud is a dark molecular cloud just north of Beta Coronae Australis. Illuminated by a number of embedded reflection nebulae, the cloud fans out from Epsilon Coronae Australis eastward along the constellation border with Sagittarius. It contains Herbig–Haro objects (protostars) and some very young stars, and at 430 light years (130 parsecs) it is one of the closest star-forming regions to the Solar System, lying at the surface of the Local Bubble. The first nebulae of the cloud were recorded in 1865 by Johann Friedrich Julius Schmidt.
Between Epsilon and Gamma Coronae Australis, the cloud includes the dark nebula and star-forming region Bernes 157. It is 55 by 18 arcminutes in size and possesses several stars around magnitude 13. These stars are dimmed by up to 8 magnitudes by the obscuring dust clouds. At the center of the active star-forming region lies the Coronet cluster (also called the R CrA Cluster), which is used in studying star and protoplanetary disk formation. R Coronae Australis (R CrA) is an irregular variable star ranging from magnitudes 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. Other stars of the cluster include S Coronae Australis, a G-class dwarf and T Tauri star.
Nearby to the north, another young variable star, TY Coronae Australis, illuminates another nebula: the reflection nebula NGC 6726/NGC 6727. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby.
The globular cluster NGC 6723, which can be seen adjacent to the nebulosity, is not part of the complex; it lies in the neighbouring constellation of Sagittarius and is much farther away.
Deep sky objects.
IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments. The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC class Wolf–Rayet star. IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north–south direction. Descriptions of its color encompass blue, blue-tinged green, and green-tinged blue.
Corona Australis' location near the Milky Way means that galaxies are uncommonly seen. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two galaxies merging, one of which is an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium and 3.9 degrees west-southwest of Beta Sagittarii. However, amateur telescopes will only show a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast–southwest.
Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th magnitude star GSC 7914 178 1 and five 10th to 11th magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Described as between magnitude 6.3 and magnitude 6.6, it is visible in binoculars and small telescopes. Around 22,000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars but the core remains unresolved.
Meteor showers.
The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate. In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated as the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related.
History.
Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN, as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea".
In the 3rd century BC, the Greek didactic poet Aratus wrote of, but did not name, the constellation, instead calling the two crowns Στεφάνοι ("Stephanoi"). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος ("Stephanos notios"), "Southern Wreath", while other authors associated it with either Sagittarius (having fallen off his head) or Centaurus; with the former, it was called "Corona Sagittarii". Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as "Parvum Coelum" ("Canopy", "Little Sky") in the 5th century. The 18th-century French astronomer Jérôme Lalande gave it the names "Sertum Australe" ("Southern Garland") and "Orbiculus Capitis", while German poet and author Philippus Caesius called it "Corolla" ("Little Crown") or "Spira Australis" ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon. Sometimes, Corona Australis was not the wreath of Sagittarius but arrows held in his hand.
Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky.
In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ"). The constellation itself was known as "ti'en pieh" ("Heavenly Turtle") and during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese and hence supplanted the turtle in this role. Arabic names for Corona Australis include "Al Ķubbah" "the Tortoise", "Al Ĥibā" "the Tent" or "Al Udḥā al Na'ām" "the Ostrich Nest". It was later given the name "Al Iklīl al Janūbiyyah", which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively.
The ǀXam speaking San people of South Africa knew the constellation as "≠nabbe ta !nu" "house of branches"—owned originally by the Dassie (rock hyrax), and the star pattern depicting people sitting in a semicircle around a fire.
The indigenous Boorong people of northwestern Victoria saw it as "Won", a boomerang thrown by "Totyarguil" (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the "Usiam", and Orion, called the "Seg". The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Corona Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called "Tanuma" and in the Tuamotus, it was called "Na Kaua-ki-Tonga".
|
6426
|
25511559
|
https://en.wikipedia.org/wiki?curid=6426
|
Corcovado
|
Corcovado (; meaning "Hunchback") is a mountain in central Rio de Janeiro, Brazil. It is a granite peak located in the Tijuca Forest, a national park.
Corcovado hill lies just west of the city center but is wholly within the city limits and visible from great distances. It is known worldwide for the statue of Jesus atop its peak, entitled "Christ the Redeemer".
Access.
The peak and statue can be reached via a narrow road, by the Corcovado Rack Railway, which was opened in 1884 and refurbished in 1980, or by the walking trail on the south side of the mountain that starts from Parque Lage. The railway uses three electrically powered trains, with a capacity of 540 passengers per hour. The rail trip takes approximately 20 minutes and departs every 20 minutes. Due to its limited passenger capacity, the wait to board at the entry station can take several hours. The year-round schedule is 8:30 to 18:30.
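The quoted throughput implies a per-departure load; a small consistency check of the figures above (an inference from the quoted numbers, not a published per-train capacity):

```python
passengers_per_hour = 540
departures_per_hour = 60 // 20   # one departure every 20 minutes

print(passengers_per_hour // departures_per_hour)  # -> 180 passengers per departure
```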
From the train terminus and road, the observation deck at the foot of the statue is reached by 223 steps, or by elevators and escalators. Among the most popular year-round tourist attractions in Rio de Janeiro, the Corcovado railway, access roads, and statue platform are commonly crowded.
Attractions.
Corcovado's most popular attraction is the statue of Jesus at its summit, entitled "Christ the Redeemer" ("Cristo Redentor"), together with its viewing platform, drawing over 300,000 visitors per year. The statue was constructed from 1922 to 1931. From the peak's platform the panoramic view includes downtown Rio de Janeiro, Sugarloaf Mountain, the Rodrigo de Freitas lagoon, Copacabana and Ipanema beaches, Maracanã Stadium, and several of Rio de Janeiro's favelas. Cloud cover is common in Rio and the view from the platform is often obscured. Sunny days are recommended for optimal viewing.
Notable past visitors to the mountain peak include Charles Darwin, Pope Pius XII, Pope John Paul II, Alberto Santos-Dumont, Albert Einstein, Diana, Princess of Wales, General Sherman, and Karl Pilkington. An additional attraction of the mountain is rock climbing. The south face had 54 climbing routes in 1992. The easiest way starts from Parque Lage.
Geology.
The peak of Corcovado is a large granite dome, a generally vertical rocky formation. It is claimed to be the highest such formation in Brazil, the second highest being Pedra Agulha, situated near the town of Pancas in Espírito Santo.
References in Brazilian culture.
Corcovado is considered an icon of Brazilian culture. "Corcovado" is a 1960 bossa nova song and jazz standard by Antônio Carlos Jobim whose lyrics draw on images of the hill. Corcovado has also been referenced in other artistic works (e.g. the lyrics of Ben Harper, literary works, films, etc.).
|
6427
|
35498457
|
https://en.wikipedia.org/wiki?curid=6427
|
Cheddar, Somerset
|
Cheddar is a large village and civil parish in the English county of Somerset. It is situated on the southern edge of the Mendip Hills, north-west of Wells, south-east of Weston-super-Mare and south-west of Bristol. The civil parish includes the hamlets of Nyland and Bradley Cross. The parish had a population of 5,755 in 2011 and an acreage of as of 1961.
Cheddar Gorge, on the northern edge of the village, is the largest gorge in the United Kingdom and includes several show caves, including Gough's Cave. The gorge has been a centre of human settlement since Neolithic times, including a Saxon palace. It has a temperate climate and provides a unique geological and biological environment that has been recognised by the designation of several Sites of Special Scientific Interest. It is also the site of several limestone quarries. The village gave its name to Cheddar cheese and has been a centre for strawberry growing. The crop was formerly transported on the Cheddar Valley rail line, which closed in the late 1960s and is now a cycle path. The village is now a major tourist destination with several cultural and community facilities, including the Cheddar Show Caves Museum.
The village supports a variety of community groups including religious, sporting and cultural organisations. Several of these are based on the site of the Kings of Wessex Academy, which is the largest educational establishment.
History.
Richard Coates, Professor Emeritus of Linguistics at the University of the West of England, has suggested that the name is "Ciw-dor," 'the door to Chew', referencing an idea that the gorge marked an important routeway through at least part of the Mendip watershed, and giving access between two large and important estates which had probably been a part of the Wessex royal demesne from the 7th century.
There is evidence of occupation from the Neolithic period in Cheddar. Britain's oldest complete human skeleton, Cheddar Man, estimated to be 9,000 years old, was found in Cheddar Gorge in 1903. Older remains from the Upper Late Palaeolithic era (12,000–13,000 years ago) have been found. There is some evidence of a Bronze Age field system at the Batts Combe quarry site. There is also evidence of Bronze Age barrows at the mound in the Longwood valley, which, if man-made, is likely to be part of a field system. The remains of a Roman villa have been excavated in the grounds of the current vicarage.
The village of Cheddar was important during the Roman and Saxon eras. There was a royal palace at Cheddar during the Saxon period, which was used on three occasions in the 10th century to host the Witenagemot. The ruins of the palace were excavated in the 1960s. They are located on the grounds of the Kings of Wessex Academy, together with a 14th-century chapel dedicated to St. Columbanus. Roman remains have also been uncovered at the site. Cheddar was listed in the Domesday Book of 1086 as "Cedre".
As early as 1130 AD, the Cheddar Gorge was recognised as one of the "Four wonders of England". Historically, Cheddar's source of wealth was farming and cheese making for which it was famous as early as 1170 AD. In the post-Conquest period, Cheddar emerges as a member of Somerset's Winterstoke Hundred. However, Frank Thorn has suggested that at a far earlier period, Cheddar lay at the centre of its own small hundred, and that it acted as the head place (or "caput") of a coherent group of three hundreds, namely Cheddar itself, Winterstoke and Bempstone (the latter containing Brent and Wedmore).
The manor of Cheddar was deforested in 1337 and Bishop Ralph was granted a licence by the King to create a hunting forest.
As early as 1527 there are records of watermills on the river. In the 17th and 18th centuries, there were several watermills which ground corn and made paper, with 13 mills on the Yeo at the peak, declining to seven by 1791 and just three by 1915. In the Victorian era it also became a centre for the production of clothing. The last mill, used as a shirt factory, closed in the early 1950s.
William Wilberforce saw the poor conditions of the locals when he visited Cheddar in 1789. He inspired Hannah More in her work to improve the conditions of the Mendip miners and agricultural workers. In 1801, an area of common land was enclosed under an inclosure act (35 Geo. 3. c. 39).
Cheddar remained a more dispersed dairy-farming village until the advent of tourism and the arrival of the railway in the Victorian era. Tourism of the Cheddar gorge and caves began with the opening of the Cheddar Valley Railway in 1869.
Cheddar, its surrounding villages and specifically the gorge have been subject to flooding. In the Chew Stoke flood of 1968 the flow of water washed large boulders down the gorge, washed away cars, and damaged the cafe and the entrance to Gough's Cave.
Government.
Cheddar is recognised as a village. The adjacent settlement of Axbridge, although only about a third the population of Cheddar, is a town. This apparently illogical situation is explained by the relative importance of the two places in historic times. While Axbridge grew in importance as a centre for cloth manufacturing in the Tudor period and gained a charter from King John, Cheddar remained a more dispersed mining and dairy-farming village. Its population grew with the arrival of the railways in the Victorian era and the advent of tourism.
The parish council, which has 15 members who are elected for four years, is responsible for local issues, including setting an annual precept (local rate) to cover the council's operating costs and producing annual accounts for public scrutiny. The parish council evaluates local planning applications and works with the police, district council officers, and neighbourhood watch groups on matters of crime, security, and traffic. The parish council's role also includes initiating projects for the maintenance and repair of parish facilities, as well as consulting with the district council on the maintenance, repair, and improvement of highways, drainage, footpaths, public transport, and street cleaning. Conservation matters (including trees and listed buildings) and environmental issues are also the responsibility of the council.
The village is in the 'Cheddar and Shipham' electoral ward. Including Shipham, the total population of the ward at the 2011 census was 6,842.
For local government purposes, since 1 April 2023, the village comes under the unitary authority of Somerset Council. Prior to this, it was part of the non-metropolitan district of Sedgemoor, which was formed on 1 April 1974 under the Local Government Act 1972, having previously been part of Axbridge Rural District. Fire, police and ambulance services are provided jointly with other authorities through the Devon and Somerset Fire and Rescue Service, Avon and Somerset Constabulary and the South Western Ambulance Service.
It is also part of the Wells and Mendip Hills county constituency represented in the House of Commons of the Parliament of the United Kingdom. It elects one Member of Parliament (MP) by the first past the post system of election. Prior to Brexit in 2020, it was part of the South West England constituency of the European Parliament.
International relations.
Cheddar is twinned with Felsberg, Germany and Vernouillet, France, and it has an active programme of exchange visits. Initially, Cheddar twinned with Felsberg in 1984. In 2000, Cheddar twinned with Vernouillet, which had also been twinned with Felsberg. Cheddar also has a friendship link with Ocho Rios in Saint Ann Parish, Jamaica.
It is also twinned with the commune of Descartes in the Indre-et-Loire department.
Geography.
The area is underlain by Black Rock slate, Burrington Oolite and Clifton Down Limestone of the Carboniferous Limestone Series, which contain ooliths and fossil debris on top of Old Red Sandstone, and by Dolomitic Conglomerate of the Keuper. Evidence for Variscan orogeny is seen in the sheared rock and cleaved shales. In many places weathering of these strata has resulted in the formation of immature calcareous soils.
Gorge and caves.
Cheddar Gorge, which is located on the edge of the village, is the largest gorge in the United Kingdom.
The gorge is the site of the Cheddar Caves, where Cheddar Man was found in 1903. Older remains from the Upper Late Palaeolithic era (12,000–13,000 years ago) have been found. The caves, produced by the activity of an underground river, contain stalactites and stalagmites. Gough's Cave, which was discovered in 1903, leads around into the rock-face, and contains a variety of large rock chambers and formations. Cox's Cave, discovered in 1837, is smaller but contains many intricate formations. A further cave houses a children's entertainment walk known as the "Crystal Quest".
Cheddar Gorge, including Cox's Cave, Gough's Cave and other attractions, has become a tourist destination, attracting about 500,000 visitors per year.
In a 2005 poll of "Radio Times" readers, following its appearance on the 2005 television programme "Seven Natural Wonders", Cheddar Gorge was named as the second greatest natural wonder in Britain, surpassed only by the Dan yr Ogof caves.
Sites of Special Scientific Interest.
There are several large and unique Sites of Special Scientific Interest (SSSI) around the village.
Cheddar Reservoir is a near-circular artificial reservoir operated by Bristol Water. Dating from the 1930s, it has a capacity of 135 million gallons (614,000 cubic metres). The reservoir is supplied with water taken from the Cheddar Yeo, which rises in Gough's Cave in Cheddar Gorge and is a tributary of the River Axe. The inlet grate for the water pipe that is used to transport the water can be seen next to the sensory garden in Cheddar Gorge. It has been designated as a Site of Special Scientific Interest (SSSI) due to its wintering waterfowl populations.
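The two capacity figures quoted above can be checked against each other, assuming imperial gallons:

```python
LITRES_PER_IMPERIAL_GALLON = 4.54609

gallons = 135_000_000
cubic_metres = gallons * LITRES_PER_IMPERIAL_GALLON / 1000  # 1 m^3 = 1,000 L
print(f"{cubic_metres:,.0f} m^3")  # -> 613,722, consistent with the quoted 614,000
```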
Cheddar Wood and the smaller Macall's Wood form a biological Site of Special Scientific Interest from what remains of the wood of the Bishops of Bath and Wells in the 13th century and of King Edmund the Magnificent's wood in the 10th. During the 19th century, its lower fringes were grubbed out to make strawberry fields. Most of these have been allowed to revert to woodland. The wood was coppiced until 1917. This site comprises a wide range of habitats which include ancient and secondary semi-natural broadleaved woodland, unimproved neutral grassland, and a complex mosaic of calcareous grassland and acidic dry dwarf-shrub heath. Cheddar Wood is one of only a few English stations for starved wood-sedge ("Carex depauperata"). Purple gromwell ("Lithospermum purpurocaeruleum"), a nationally rare plant, also grows in the wood. Butterflies include silver-washed fritillary ("Argynnis paphia"), dark green fritillary ("Argynnis aglaja"), pearl-bordered fritillary ("Boloria euphrosyne"), holly blue ("Celastrina argiolus") and brown argus ("Aricia agestis"). The slug "Arion fasciatus", which has a restricted distribution in the south of England, and the soldier beetle "Cantharis fusca" also occur.
By far the largest of the SSSIs is called Cheddar Complex and covers of the gorge, caves and the surrounding area. It is important because of both biological and geological features. It includes four SSSIs, formerly known as Cheddar Gorge SSSI, August Hole/Longwood Swallet SSSI, GB Cavern Charterhouse SSSI and Charterhouse on-Mendip SSSI. It is partly owned by the National Trust who acquired it in 1910 and partly managed by the Somerset Wildlife Trust.
Quarries.
Close to the village and gorge are Batts Combe quarry and Callow Rock quarry, two of the active Quarries of the Mendip Hills where limestone is still extracted. Operating since the early 20th century, Batts Combe is owned and operated by Hanson Aggregates. The output in 2005 was around 4,000 tonnes of limestone per day, one third of which was supplied to an on-site lime kiln, which closed in 2009; the remainder was sold as coated or dusted aggregates. The limestone at this site is close to 99 percent carbonate of calcium and magnesium (dolomite).
The Chelmscombe Quarry finished its work as a limestone quarry in the 1950s and was then used by the Central Electricity Generating Board as a tower testing station. During the 1970s and 1980s it was also used to test the ability of containers of radioactive material to withstand impacts and other accidents.
Climate.
Along with the rest of South West England, Cheddar has a temperate climate which is generally wetter and milder than the rest of the country. The annual mean temperature is approximately . Seasonal temperature variation is less extreme than most of the United Kingdom because of the adjacent sea, which moderates temperature. The summer months of July and August are the warmest with mean daily maxima of approximately . In winter mean minimum temperatures of are common. In the summer the Azores high-pressure system affects the south-west of England. Convective cloud sometimes forms inland, reducing the number of hours of sunshine; annual sunshine rates are slightly less than the regional average of 1,600 hours. Most of the rainfall in the south-west is caused by Atlantic depressions or by convection. Most of the rainfall in autumn and winter is caused by the Atlantic depressions, which are most active during those seasons. In summer, a large proportion of the rainfall is caused by sun heating the ground leading to convection and to showers and thunderstorms. Average rainfall is around . About 8–15 days of snowfall per year is typical. November to March have the highest mean wind speeds, and June to August have the lightest winds. The predominant wind direction is from the south-west.
Demography.
The parish had a population of 5,093 in 2011, with a mean age of 43 years. Residents lived in 2,209 households. The vast majority of households (2,183) gave their ethnic status at the 2001 census as white.
2021 census.
According to the most recent 2021 census, the village had a total population of 6,263 with 51.1% female and 48.9% male.
6,101 people, or 97.3%, identified as white; 1% (61) as Asian; 0.3% (17) as Black; and 1.3% (79) as mixed.
The most common places of birth were the United Kingdom (5,900 people, or 94.1%) and the European Union (156, or 2.5%); 81 residents were born in Africa, 65 in the Middle East and Asia, and 29 in the Americas and the Caribbean.
Economy.
The village gave its name to Cheddar cheese, which is the most popular type of cheese in the United Kingdom. The cheese is now made and consumed worldwide, and only one producer remains in the village.
Since the 1880s, Cheddar's other main produce has been the strawberry, which is grown on the south-facing lower slopes of the Mendip hills. As a consequence of its use for transporting strawberries to market, the since-closed Cheddar Valley line became known as "The Strawberry Line" after it opened in 1869.
The line ran from Yatton to Wells. When the rest of the line was closed and all passenger services ceased, the section of the line between Cheddar and Yatton remained open for goods traffic. It provided a fast link with the main markets for the strawberries in Birmingham and London, but finally closed in 1964, becoming part of the Cheddar Valley Railway Nature Reserve.
Cheddar Ales is a small brewery based in the village, producing beer for local public houses.
Tourism is a significant source of employment. Around 15 percent of employment in Sedgemoor is provided by tourism, but within Cheddar it is estimated to employ as many as 1,000 people.
The village also has a youth hostel, and a number of camping and caravan sites.
Culture and community.
Cheddar has a number of active service clubs including Cheddar Vale Lions Club, Mendip Rotary and Mendip Inner Wheel Club. The clubs raise money for projects in the local community and hold annual events such as a fireworks display, duck races in the Gorge, a dragon boat race on the reservoir and concerts on the grounds of the nearby St Michael's Cheshire Home.
Several notable people have been born or lived in Cheddar. Musician Jack Bessant, the bass guitarist with the band Reef, grew up on his parents' strawberry farm, and Matt Goss and Luke Goss, former members of Bros, lived in Cheddar for nine months as children. Trina Gulliver, ten-time World Professional Darts Champion, lived in Cheddar until 2017. The comedian Richard Herring grew up in Cheddar. His 2008 Edinburgh Festival Fringe show, "The Headmaster's Son", is based on his time at The Kings of Wessex School, where his father Keith was the headmaster. The final performance of this show was held at the school in November 2009. He also visited the school in March 2010 to perform his show "Hitler Moustache". In May 2013, a community radio station called Pulse was launched.
Landmarks.
The market cross in Bath Street dates from the 15th century, with the shelter having been rebuilt in 1834. It has a central octagonal pier, a socket raised on four steps, a hexagonal shelter with six arched four-centred openings, shallow two-stage buttresses at each angle, and an embattled parapet. The shaft is crowned by an abacus with figures in niches, probably from the late 19th century, although the cross is now missing. It was rebuilt by Thomas, Marquess of Bath. It is a scheduled monument (Somerset County No 21) and Grade II* listed building.
In January 2000, the cross was seriously damaged in a traffic accident. By 2002, the cross had been rebuilt and the area around it was redesigned to protect and enhance its appearance.
The cross was badly damaged again in March 2012, when a taxi crashed into it late at night demolishing two sides.
Repair work, which included the addition of wooden-clad steel posts to protect against future crashes, was completed in November 2012 at a cost of £60,000.
Hannah More, a philanthropist and educator, founded a school in the village in the late 18th century for the children of miners. Her first school was located in a 17th-century house. Now named "Hannah More's Cottage", the Grade II-listed building is used by the local community as a meeting place.
Transport.
The village is situated on the A371 road which runs from Wincanton to Weston-super-Mare. It is approximately from the route of the M5 motorway, around a drive from junction 22.
It was on the Cheddar Valley line, a railway opened in 1869 that became known as The Strawberry Line because of the large volume of locally grown strawberries it carried. The line ran from Yatton railway station through to Wells (Tucker Street) railway station and joined the East Somerset Railway to form a through route via Shepton Mallet (High Street) railway station to Witham. It survived until the "Beeching Axe"; towards the end of its life there were so few passengers that diesel railcars were sometimes used. The branch closed to passengers on 9 September 1963 and to goods in 1964, after which the trackbed became part of the Cheddar Valley Railway Nature Reserve and of National Cycle Network route 26. Sections of the now-disused railway have been opened as the Strawberry Line Trail, which currently runs from Yatton to Cheddar and intersects with the West Mendip Way and various other footpaths.
The principal bus route is the hourly service 126 between Weston-super-Mare and Wells operated by First West of England. Other bus routes include the service 668 from Shipham to Street which runs every couple of hours operated by Libra Travel, as well as the college bus service 66 which runs from Axbridge to the Bridgwater Campus of Bridgwater and Taunton College in the mornings and evenings of college term times, and is operated by Bakers Dolphin.
Education.
The first school in Cheddar was set up by Hannah More in the 18th century. Today Cheddar has three schools belonging to the Cheddar Valley Group of Schools, twelve schools that provide the valley's three-tier education system. Cheddar First School has ten classes for children between 4 and 9 years. Fairlands Middle School, categorised as a middle-deemed-secondary school, has 510 pupils between 9 and 13; it takes children moving up from Cheddar First School as well as from other first schools in the Cheddar Valley. The Kings of Wessex Academy, a coeducational comprehensive school, has been rated as "good" by Ofsted. It has 1,176 students aged 13 to 18, including 333 in the sixth form. Kings is a faith school linked to the Church of England. It was awarded the specialist status of Technology College in 2001, enabling it to develop its information technology (IT) facilities and improve courses in science, mathematics and design technology. In 2007 it became a foundation school, giving it more control over its own finances. The academy owns and runs a sports centre and swimming pool, Kings Fitness & Leisure, with facilities used by students as well as residents. Since November 2016 it has been part of the Wessex Learning Trust, which incorporates eight academies from the surrounding area.
Religious sites.
The Church of St Andrew dates from the 14th century. It was restored in 1873 by William Butterfield. It is a Grade I listed building and contains some 15th-century stained glass and an altar table of 1631. The chest tomb in the chancel is believed to contain the remains of Sir Thomas Cheddar and is dated 1442. The tower, which rises to , contains a bell dating from 1759 made by Thomas Bilbie of the Bilbie family. The graveyard contains the grave of the hymn writer William Chatterton Dix.
There are also churches for Roman Catholic, Methodist and other denominations, including Cheddar Valley Community Church, which meets at the Kings of Wessex School on Sundays and also has its own site at Tweentown for weekday meetings. The Baptist chapel was built in 1831.
Sport.
Kings Fitness & Leisure, situated on the grounds of the Kings of Wessex School, provides a venue for various sports and includes a 20-metre swimming pool, racket sport courts, a sports hall, dance studios and a gym. A youth sports festival was held on Sharpham Road Playing Fields in 2009. In 2010 a skatepark was built in the village, funded by the Cheddar Local Action Team.
Cheddar A.F.C., founded in 1892 and nicknamed "The Cheesemen", play in the Western Football League Division One. In 2009 plans were revealed to move the club from its present home at Bowdens Park on Draycott Road to a new larger site.
Cheddar Cricket Club was formed in the late 19th century and moved to Sharpham Road Playing Fields in 1964. It now plays in the West of England Premier League Somerset Division. Cheddar Rugby Club, which owns part of the Sharpham playing fields, was formed in 1836. The club organises an annual Cheddar Rugby Tournament. Cheddar Lawn Tennis Club was formed in 1924; it plays in the North Somerset League and also offers social tennis and coaching. Cheddar Running Club organised an annual half marathon until 2009.
The village is both on the route of the West Mendip Way and Samaritans Way South West.
|
6429
|
1301225288
|
https://en.wikipedia.org/wiki?curid=6429
|
Compact disc
|
The compact disc (CD) is a digital optical disc data storage format co-developed by Philips and Sony to store and play digital audio recordings. It employs the Compact Disc Digital Audio (CD-DA) standard and is capable of holding of uncompressed stereo audio. First released in Japan in October 1982, the CD was the second optical disc format to reach the market, following the larger LaserDisc (LD). In later years, the technology was adapted for computer data storage as CD-ROM and subsequently expanded into various writable and multimedia formats. Over 200 billion CDs (including audio CDs, CD-ROMs, and CD-Rs) have been sold worldwide.
Standard CDs have a diameter of and typically hold up to 74 minutes of audio or approximately of data. This was later regularly extended to 80 minutes or by reducing the spacing between data tracks, with some discs unofficially reaching up to 99 minutes or which falls outside established specifications. Smaller variants, such as the Mini CD, range from in diameter and have been used for CD singles or distributing device drivers and software.
The CD gained widespread popularity in the late 1980s and early 1990s. By 1991, it had surpassed the phonograph record and the cassette tape in sales in the United States, becoming the dominant physical audio format. By 2000, CDs accounted for 92.3% of the U.S. music market share. The CD is widely regarded as the final dominant format of the album era, before the rise of MP3, digital downloads, and streaming platforms in the mid-2000s led to its decline.
Beyond audio playback, the compact disc was adapted for general-purpose data storage under the CD-ROM format, which initially offered more capacity than contemporary personal computer hard disk drives. Additional derived formats include write-once discs (CD-R), rewritable media (CD-RW), and multimedia applications such as Video CD (VCD), Super Video CD (SVCD), Photo CD, Picture CD, Compact Disc Interactive (CD-i), Enhanced Music CD, and Super Audio CD (SACD), the latter of which can include a standard CD-DA layer for backward compatibility.
Physical details.
A CD is made from thick polycarbonate plastic and weighs 14–33 grams. From the center outward, components are: the center spindle hole (15 mm), the first-transition area (clamping ring), the clamping area (stacking ring), the second-transition area (mirror band), the program (data) area, and the rim. The inner program area occupies a radius from 25 to 58 mm.
A thin layer of aluminum or, more rarely, gold is applied to the surface, making it reflective. The metal is protected by a film of lacquer normally spin coated directly on the reflective layer. The label is printed on the lacquer layer, usually by screen printing or offset printing.
CD data is represented as tiny indentations known as "pits", encoded in a spiral track molded into the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 μm in length. The distance between the windings (the "pitch") is 1.6 μm (measured center-to-center, not between the edges).
When playing an audio CD, a motor within the CD player spins the disc to a scanning velocity of 1.2–1.4 m/s (constant linear velocity, CLV)—equivalent to approximately 500 RPM at the inside of the disc, and approximately 200 RPM at the outside edge. The track on the CD begins at the inside and spirals outward so a disc played from beginning to end slows its rotation rate during playback.
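As a quick check of those figures, the rotation rate at any point on the disc follows directly from the linear velocity and the track radius. A minimal sketch in Python, using the 25 mm and 58 mm program-area radii given above and a mid-range scanning speed of 1.3 m/s (the exact speed varies by disc):

import math

# Revolutions per minute needed to hold a given linear track speed
# at a given radius under constant linear velocity (CLV) playback.
def rpm(linear_velocity_m_s, radius_mm):
    circumference_m = 2 * math.pi * (radius_mm / 1000)
    return linear_velocity_m_s / circumference_m * 60

print(f"inner edge: {rpm(1.3, 25):.0f} RPM")  # ~497 RPM, i.e. about 500
print(f"outer edge: {rpm(1.3, 58):.0f} RPM")  # ~214 RPM, i.e. about 200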
The program area is 86.05 cm2 and the length of the recordable spiral is . With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or 650 MiB of data on a CD-ROM. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a narrower track pitch of 1.5 μm increases the playing time to 80 minutes, and data capacity to 700 MiB. Even denser tracks are possible, with semi-standard 90-minute/800 MiB discs having a 1.33 μm pitch, and 99-minute/870 MiB discs a 1.26 μm pitch, but compatibility suffers as density increases.
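The playing times quoted above can be reproduced by dividing the program area by the track pitch, which gives the total spiral length, and then dividing by the scanning speed. A rough sketch, assuming the 86.05 cm2 program area applies unchanged at every pitch:

PROGRAM_AREA_MM2 = 8605  # 86.05 cm^2, from the text above

def playing_time_min(pitch_um, speed_m_s=1.2):
    # Spiral length (m) = area / pitch; playing time = length / speed.
    spiral_length_m = PROGRAM_AREA_MM2 / (pitch_um / 1000) / 1000
    return spiral_length_m / speed_m_s / 60

for pitch in (1.6, 1.5, 1.33):
    print(f"{pitch} um pitch: {playing_time_min(pitch):.0f} min")
# -> roughly 75, 80 and 90 minutes; the 99-minute discs also push
# other tolerances, so this simple model understates them.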
A CD is read by focusing a 780 nm wavelength (near infrared) semiconductor laser (early players used a HeNe laser) through the bottom of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light is reflected. Because the pits are indented into the top layer of the disc and are read through the transparent polycarbonate base, the pits form bumps when read. The laser hits the disc, casting a circle of light wider than the modulated spiral track reflecting partially from the lands and partially from the top of any bumps where they are present. As the laser passes over a pit (bump), its height means that the round trip path of the light reflected from its peak is 1/2 wavelength out of phase with the light reflected from the land around it. This is because the height of a bump is around 1/4 of the wavelength of the light used, so the light falls 1/4 out of phase before reflection and another 1/4 wavelength out of phase after reflection. This causes partial cancellation of the laser's reflection from the surface. By measuring the reflected intensity change with a photodiode, a modulated signal is read back from the disc.
To accommodate the spiral pattern of data, the laser is placed on a mobile mechanism within the disc tray of any CD player. This mechanism typically takes the form of a sled that moves along a rail. The sled can be driven by a worm gear or linear motor. Where a worm gear is used, a second shorter-throw linear motor, in the form of a coil and magnet, makes fine position adjustments to track eccentricities in the disc at high speed. Some CD drives (particularly those manufactured by Philips during the 1980s and early 1990s) use a swing arm similar to that seen on a gramophone.
The pits and lands do "not" directly represent the 0s and 1s of binary data. Instead, non-return-to-zero, inverted encoding is used: a change from either pit to land or land to pit indicates a 1, while no change indicates a series of 0s. There must be at least two, and no more than ten 0s between each 1, which is defined by the length of the pit. This, in turn, is decoded by reversing the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon coding, finally revealing the raw data stored on the disc. These encoding techniques (defined in the "Red Book") were originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM).
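To illustrate just the outermost layer of this scheme, here is a toy decoder for the transition rule described above: a pit/land change reads as 1, no change as 0. The EFM and Reed–Solomon stages that follow in a real player are not sketched, and the pit/land string is an invented example:

def nrzi_decode(surface):
    """surface: a run of 'P' (pit) / 'L' (land) samples, one per bit cell."""
    bits = []
    prev = 'L'  # assume the read starts on a land
    for cell in surface:
        bits.append('1' if cell != prev else '0')
        prev = cell
    return ''.join(bits)

# A 3-cell land, 4-cell pit and 5-cell land: each edge yields a 1, and
# run lengths of 3 to 11 cells enforce "at least two, at most ten" zeros.
print(nrzi_decode("LLLPPPPLLLLL"))  # -> 000100010000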
Integrity.
CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently, CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely sealed, allowing gases and liquids to enter the CD and corrode the metal reflective layer and/or interfere with the focus of the laser on the pits, a condition known as disc rot. The fungus "Geotrichum candidum" has been found—under conditions of high heat and humidity—to consume the polycarbonate plastic and aluminium found in CDs.
The data integrity of compact discs can be measured using surface error scanning, which measures the rates of different types of data errors, known as "C1", "C2", "CU", and extended (finer-grain) error measurements known as "E11", "E12", "E21", "E22", "E31" and "E32". Higher rates indicate a possibly damaged or unclean data surface, low media quality, deteriorating media, or recordable media written to by a malfunctioning CD writer.
Error scanning can reliably predict data losses caused by media deterioration. Support for error scanning differs between vendors and models of optical disc drives, and "extended" error scanning (known as "advanced error scanning" in Nero DiscSpeed), which reports the six aforementioned E-type errors, has as of 2020 been available only on Plextor and some BenQ optical drives.
Disc shapes and diameters.
The digital data on a CD begins at the center of the disc and proceeds toward the edge, which allows adaptation to the different sizes available. Standard CDs are available in two sizes. By far the most common is in diameter, with a 74-, 80-, 90-, or 99-minute audio capacity and a 650, 700, 800, or 870 MiB (737,280,000-byte) data capacity. Discs are thick, with a center hole. The size of the hole was chosen by Joop Sinjou and based on a Dutch 10-cent coin: a dubbeltje. Philips/Sony patented the physical dimensions.
The official Philips history says the capacity was specified by Sony executive Norio Ohga to be able to contain the entirety of Beethoven's Ninth Symphony on one disc.
This is a myth according to Kees Immink, as the EFM code format had not yet been decided in December 1979, when the 120 mm size was adopted. The adoption of EFM in June 1980 allowed 30 percent more playing time, which would have resulted in 97 minutes for the 120 mm diameter or 74 minutes for a disc as small as . Instead, the information density was lowered by 30 percent to keep the playing time at 74 minutes. The 120 mm diameter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DVD, and Blu-ray Disc. The diameter discs ("Mini CDs") can hold up to 24 minutes of music or 210 MiB.
SHM-CD.
SHM-CD (short for "Super High Material Compact Disc") is a variant of the Compact Disc, which replaces the polycarbonate base with a proprietary material. This material was created during joint research by Universal Music Japan and JVC into manufacturing high-clarity liquid-crystal displays.
SHM-CDs are fully compatible with all CD players since the difference in light refraction is not detected as an error. JVC claims that the greater fluidity and clarity of the material used for SHM-CDs results in a higher reading accuracy and improved sound quality. However, since the CD-Audio format contains inherent error correction, it is unclear whether a reduction in read errors would be great enough to produce an improved output.
Logical format.
Audio CD.
The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony and Philips. The document is known colloquially as the "Red Book" after the color of its cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to be an allowable option within the "Red Book" format, but has never been implemented. Monaural audio has no existing standard on a "Red Book" CD; thus, mono source material is usually presented as two identical channels in a standard "Red Book" stereo track (i.e., mirrored mono); an MP3 CD, however, can have audio file formats with mono sound.
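The raw data rate implied by these "Red Book" parameters is easy to derive, and it also shows why a 74-minute disc corresponds to roughly 650 MiB of computer data once error-correction overhead is accounted for. A minimal sketch:

# Two channels of 16-bit samples at 44.1 kHz.
CHANNELS, BITS, RATE_HZ = 2, 16, 44_100

bytes_per_second = CHANNELS * BITS * RATE_HZ // 8
print(bytes_per_second)  # 176400 B/s, about 1.41 Mbit/s

audio_bytes = bytes_per_second * 74 * 60
print(f"{audio_bytes / 2**20:.0f} MiB")  # ~747 MiB of raw audio samples
# A CD-ROM stores only ~650 MiB of user data in the same 74 minutes
# because each sector gives up part of its bytes to extra error correction.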
CD-Text is an extension of the "Red Book" specification for an audio CD that allows for the storage of additional text information (e.g., album name, song name, artist) on a standards-compliant audio CD. The information is stored either in the lead-in area of the CD, where there are roughly five kilobytes of space available, or in the subcode channels R to W on the disc, which can store about 31 megabytes.
Compact Disc + Graphics is a special audio compact disc that contains graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The CD+G format takes advantage of the channels R through W. These six bits store the graphics information.
CD + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG uses basic CD-ROM features to display text and video information in addition to the music being played. This extra data is stored in subcode channels R-W. Very few CD+EG discs have been published.
Super Audio CD.
Super Audio CD (SACD) is a high-resolution, read-only optical audio disc format that was designed to provide higher-fidelity digital audio reproduction than the "Red Book". Introduced in 1999, it was developed by Sony and Philips, the same companies that created the "Red Book". SACD was in a format war with DVD-Audio, but neither has replaced audio CDs. The SACD standard is referred to as the "Scarlet Book" standard.
Titles in the SACD format can be issued as hybrid discs; these discs contain the SACD audio stream as well as a standard audio CD layer which is playable in standard CD players, thus making them backward compatible.
CD-MIDI.
CD-MIDI is a format used to store music-performance data, which upon playback is performed by electronic instruments that synthesize the audio. Hence, unlike the original "Red Book" CD-DA, these recordings are not digitally sampled audio recordings. The CD-MIDI format is defined as an extension of the original "Red Book".
CD-ROM.
For the first few years of its existence, the CD was a medium used purely for audio. In 1988, the "Yellow Book" CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical data computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive.
Video CD.
Video CD (VCD, View CD, and Compact Disc digital video) is a standard digital format for storing video media on a CD. VCDs are playable in dedicated VCD players, most modern DVD-Video players, personal computers, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the "White Book" standard.
Overall picture quality is intended to be comparable to VHS video. Poorly compressed VCD video can sometimes be of lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise and does not deteriorate further with each use. The 352×240 (or SIF) resolution was chosen because it is half the vertical and half the horizontal resolution of NTSC video. 352×288 is similarly one-quarter of the PAL/SECAM resolution. This approximates the (overall) resolution of an analog VHS tape, which, although it has double the number of (vertical) scan lines, has a much lower horizontal resolution.
Super Video CD.
Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video media on standard compact discs. SVCD was intended as a successor to VCD and an alternative to DVD-Video and falls somewhere between both in terms of technical capability and picture quality.
SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard-quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring a significant quality loss, and many hardware players are unable to play a video with an instantaneous bit rate lower than 300 to 600 kilobits per second.
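Those resolution comparisons can be checked against the commonly cited NTSC frame sizes, which are assumptions here rather than figures from the text (SVCD 480×480, DVD 720×480, VCD 352×240):

# Pixel counts per frame for the assumed NTSC variants.
svcd, dvd, vcd = 480 * 480, 720 * 480, 352 * 240

print(f"SVCD/DVD: {svcd / dvd:.2f}")  # 0.67, i.e. two-thirds of DVD
print(f"SVCD/VCD: {svcd / vcd:.2f}")  # 2.73, i.e. over 2.7 times VCD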
Photo CD.
Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints, and slides using special proprietary encoding. Photo CDs are defined in the "Beige Book" and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software (irrespective of operating system). The images can also be printed out on photographic paper with a special Kodak machine. This format is not to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format.
CD-i.
The Philips "Green Book" specifies a standard for interactive multimedia compact discs designed for CD-i players (1993). CD-i discs can contain audio tracks that can be played on regular CD players, but CD-i discs are not compatible with most CD-ROM drives and software. The CD-i Ready specification was later created to improve compatibility with audio CD players, and the CD-i Bridge specification was added to create CD-i-compatible discs that can be accessed by regular CD-ROM drives.
CD-i Ready.
Philips defined a format similar to CD-i called CD-i Ready, which puts CD-i software and data into the pregap of track 1. This format was supposed to be more compatible with older audio CD players.
Enhanced Music CD (CD+).
Enhanced Music CD, also known as CD Extra or CD Plus, is a format that combines audio tracks and data tracks on the same disc by putting audio tracks in a first session and data in a second session. It was developed by Philips and Sony, and it is defined in the "Blue Book".
VinylDisc.
VinylDisc is the hybrid of a standard audio CD and the vinyl record. The vinyl layer on the disc's label side can hold approximately three minutes of music.
Manufacture, cost, and pricing.
In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents for the CD. The wholesale cost of CDs was $0.75 to $1.15, while the typical retail price of a prerecorded music CD was $16.98. On average, the store received 35 percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and the distributor 9 percent. When 8-track cartridges, compact cassettes, and CDs were introduced, each was marketed at a higher price than the format they succeeded, even though the cost to produce the media was reduced. This was done because the perceived value increased. This continued from phonograph records to CDs, but was broken when Apple marketed MP3s for $0.99, and albums for $9.99. The incremental cost, though, to produce an MP3 is negligible.
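As a worked example of that split, the percentages can be applied to the quoted $16.98 list price; the dollar figures below are derived from the percentages, not taken from the source:

RETAIL_PRICE = 16.98
SHARES = {"store": 0.35, "record company": 0.27, "artist": 0.16,
          "manufacturer": 0.13, "distributor": 0.09}

# The five shares sum to 100% of the retail price.
for party, share in SHARES.items():
    print(f"{party}: ${RETAIL_PRICE * share:.2f}")
# store $5.94, record company $4.58, artist $2.72,
# manufacturer $2.21, distributor $1.53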
Writable compact discs.
Recordable CD.
Recordable Compact Discs, CD-Rs, are injection-molded with a blank data spiral. A photosensitive dye is then applied, after which the discs are metalized and lacquer-coated. The write laser of the CD recorder changes the color of the dye to allow the read laser of a standard CD player to see the data, just as it would with a standard stamped disc. The resulting discs can be read by most CD-ROM drives and played in most audio CD players. CD-Rs follow the "Orange Book" standard.
CD-R recordings are designed to be permanent. Over time, the dye's physical characteristics may change causing read errors and data loss until the reading device cannot recover with error correction methods. Errors can be predicted using surface error scanning. The design life is from 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. Testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as disc rot, for which there are several, mostly environmental, reasons.
The recordable audio CD is designed to be used in a consumer audio CD recorder. These consumer audio CD recorders use SCMS (Serial Copy Management System), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable Audio CD is typically somewhat more expensive than CD-R due to lower production volume and a 3 percent AHRA royalty used to compensate the music industry for the making of a copy.
High-capacity recordable CD is a higher-density recording format that can hold 20% more data than conventional discs. The higher capacity is incompatible with some recorders and recording software.
ReWritable CD.
CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write laser, in this case, is used to heat and alter the properties (amorphous vs. crystalline) of the alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot read CD-RW discs, although most later CD audio players and stand-alone DVD players can. CD-RWs follow the "Orange Book" standard.
The ReWritable Audio CD is designed to be used in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. These consumer audio CD recorders use the Serial Copy Management System (SCMS), an early form of digital rights management (DRM), to conform to the United States' Audio Home Recording Act (AHRA). The ReWritable Audio CD is typically somewhat more expensive than CD-R due to (a) lower volume and (b) a 3 percent AHRA royalty used to compensate the music industry for the making of a copy.
Copy protection.
The "Red Book" audio specification, except for a simple "anti-copy" statement in the subcode, does not include any copy protection mechanism. Known at least as early as 2001, attempts were made by record companies to market "copy-protected" non-standard compact discs, which cannot be ripped, or copied, to hard drives or easily converted to other formats (like FLAC, MP3 or Vorbis). One major drawback to these copy-protected discs is that most will not play on either computer CD-ROM drives or some standalone CD players that use CD-ROM mechanisms. Philips has stated that such discs are not permitted to bear the trademarked "Compact Disc Digital Audio" logo because they violate the "Red Book" specifications. Numerous copy-protection systems have been countered by readily available, often free, software, or even by simply turning off automatic AutoPlay to prevent the running of the DRM executable program.
|
6431
|
28481209
|
https://en.wikipedia.org/wiki?curid=6431
|
Charles Farrar Browne
|
Charles Farrar Browne (April 26, 1834 – March 6, 1867) was an American humor writer, better known under his "nom de plume", Artemus Ward, a character (an illiterate rube with "Yankee common sense") whom Browne also played in public performances. He is considered to be America's first stand-up comedian. His birth name was Brown but he added the "e" after he became famous.
Biography.
Browne was born on 26 April 1834,<ref name="ohiocenterforthebook/charles-farrar-browne"></ref> in Waterford, Maine to Caroline (née Farrar)<ref name="mainememory/8744"></ref> "a descendant of the first Puritans" and Levi Brown,<ref name="case/ech/browne-c-f"></ref> who "operated a store in Waterford, engaged in farming and did some surveying",<ref name="maineanencyclopedia/charles-f-browne/"></ref> and was a justice of the peace.<ref name="NEhs/artemus-ward"></ref>
He began his career at the age of fourteen, having "learned the printer's trade"<ref name="nytimes/old-friends-reminiscences"></ref> at "The Advertiser" in Norway, Maine. He later apprenticed in the printing office of "The Skowhegan Clarion",<ref name="cathen/02804b"></ref> Skowhegan, Maine, and then worked as a compositor and occasional contributor to daily and weekly journals. In 1858, in "The Plain Dealer" newspaper (Cleveland, Ohio), he published the first of the "Artemus Ward" series ("a barely literate circus sideshow manager who toured the country and wrote about the people and events he saw",<ref name="pressherald/2019/04/24"></ref> "loosely based on P.T. Barnum"<ref name="biography/a43468479"></ref>), which, in collected form, achieved great popularity in both America and England.
Browne's companion at the "Plain Dealer", George Hoyt, wrote:
"his desk was a rickety table which had been whittled and gashed until it looked as if it had been the victim of lightning. His chair was a fit companion thereto, a wabbling, unsteady affair, sometimes with four and sometimes with three legs. But Browne saw neither the table, nor the chair, nor any person who might be near, nothing, in fact, but the funny pictures which were tumbling out of his brain. When writing, his gaunt form looked ridiculous enough. One leg hung over the arm of his chair like a great hook, while he would write away, sometimes laughing to himself, and then slapping the table in the excess of his mirth."
In 1860, he became editor of the first "Vanity Fair", a humorous New York weekly that failed in 1863. At about the same time, he began to appear as a lecturer who, by his droll and eccentric humor, attracted large audiences. Browne was also known as a member of the New York bohemian set which included leader Henry Clapp Jr., Walt Whitman, Fitz Hugh Ludlow, and actress Adah Isaacs Menken.
Though his books were popular, it was his lecturing, delivered with deadpan expression, that brought him fame.<ref name="Britannica/Artemus-Ward"></ref>
In 1863, Browne came to San Francisco to perform as Artemus Ward. An early expert at show business publicity, Browne sent his manager ahead by several weeks to buy advertising in the local papers and promote the show among prominent citizens for endorsements. On November 13, 1863, Browne stood before a packed crowd at Platt's Music Hall, playing the part of Artemus Ward as an illiterate rube but with "Yankee common sense." Writer Bret Harte was in the audience that night and he described it in "the Golden Era" as capturing American speech: "humor that belongs to the country of boundless prairies, limitless rivers, and stupendous cataracts—that fun which overlies the surface of our national life, which is met in the stage, rail-car, canal and flat-boat, which bursts out over camp-fires and around bar-room stoves."
"Artemus Ward" was a favorite author of U.S. President Abraham Lincoln. Before presenting "The Emancipation Proclamation" to his Cabinet, Lincoln read to them the latest episode, "Outrage in Utiky", also known as "High-Handed Outrage at Utica".
When Browne performed in Virginia City, Nevada, he met Mark Twain and the two became friends. In his correspondence with Twain, Browne called him "My Dearest Love." Legend has it that, following a stage performance there, Browne, Twain, and Dan De Quille were trekking on a (drunken) rooftop tour of Virginia City until a town constable threatened to blast all three with a shotgun loaded with rock salt. Browne recommended Twain to the editors of the "New York Press" and urged him to journey to New York.
In 1866, Browne visited England and attracted a large following to his playing Artemus Ward, both as lecturer and for his literary contributions to "Punch". But within a year his health gave way and he died of tuberculosis at Southampton on March 6, 1867.
In England, Browne was buried at Kensal Green Cemetery, but his remains were removed to the United States in 1868 and buried at Elm Vale Cemetery<ref name="mainememory/8733"></ref> in Waterford, Maine.
Legacy.
In Cleveland, where Browne started his comedy career, an elementary school is named after him, known as Artemus Ward Elementary on W. 140th Street. In the American Garden of the Cleveland Cultural Gardens in Rockefeller Park, a monument of him was erected, next to Mark Twain.
|
6432
|
42021989
|
https://en.wikipedia.org/wiki?curid=6432
|
Caelum
|
Caelum is a faint constellation in the southern sky, introduced in the 1750s by Nicolas Louis de Lacaille and counted among the 88 modern constellations. Its name means "chisel" in Latin, and it was formerly known as Caelum Sculptorium ("Engraver's Chisel"); it is a rare word, unrelated to the far more common Latin "caelum", meaning "sky", "heaven", or "atmosphere". It is the eighth-smallest constellation, and subtends a solid angle of around 0.038 steradians, just less than that of Corona Australis.
Due to its small size and location away from the plane of the Milky Way, Caelum is a rather barren constellation, with few objects of interest. The constellation's brightest star, Alpha Caeli, is only of magnitude 4.45, and only one other star, (Gamma) γ1 Caeli, is brighter than magnitude 5. Other notable objects in Caelum are RR Caeli, a binary star with one known planet approximately away; X Caeli, a Delta Scuti variable that forms an optical double with γ1 Caeli; and HE0450-2958, a Seyfert galaxy that at first appeared as just a jet, with no host galaxy visible.
History.
Caelum was introduced as one of fourteen southern constellations in the 18th century by Nicolas Louis de Lacaille, a celebrated French astronomer of the Age of Enlightenment.
Among French speakers it retains its original name "Burin", which Lacaille Latinized in his catalogue of 1763 as "Caelum Sculptoris" ("Engraver's Chisel").
Francis Baily shortened this name to "Caelum", as suggested by John Herschel. In Lacaille's original chart the constellation was shown as a pair of engraver's tools: a standard burin and a more specific shape-forming échoppe, tied by a ribbon, but it came to be depicted as a simple chisel. Johann Elert Bode stated the name as plural with a singular possessor, "Caela Scalptoris" – in German ("die") "Grabstichel" ("the Engraver's Chisels") – but this did not stick.
Characteristics.
Caelum is bordered by Dorado and Pictor to the south, Horologium and Eridanus to the east, Lepus to the north, and Columba to the west. Covering only 125 square degrees, it ranks 81st of the 88 modern constellations in size.
Its main asterism consists of four stars, and twenty stars in total are brighter than magnitude 6.5.
The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are a 12-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , and the declinations range from to . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Cae" for the constellation in 1922.
Its main stars are visible in favourable conditions, given a clear southern horizon, for part of the year as far north as about the 41st parallel.
For viewers in the mid and higher inhabited latitudes of the Southern Hemisphere, these stars are above the horizon in darkness for part of every night, since Caelum, like Taurus, Eridanus and Orion to its north, culminates at midnight in December (high summer). In winter (around June) the constellation culminates near midday, so it can be observed only well clear of the horizon, rising before dawn or setting after dusk. In South Africa, Argentina, their subtropical neighbours and parts of Australia, the key stars may be traced before dawn in the east in high June; near the equator the stars lose night-time visibility in May and June; and in the northern tropics and subtropics they compete poorly with the Sun from late February to mid-September, with March particularly unfavourable after sunset owing to the light of the Milky Way.
Notable features.
Stars.
Caelum is a faint constellation: it has no star brighter than magnitude 4 and only two stars brighter than magnitude 5.
Lacaille gave six stars Bayer designations, labeling them Alpha (α) to Zeta (ζ) in 1756, but omitted Epsilon (ε) and designated two adjacent stars as Gamma (γ). Bode extended the designations to Rho (ρ) for other stars, but most of these have fallen out of use. Caelum is too far south for any of its stars to bear Flamsteed designations.
The brightest star, (Alpha) α Caeli, is a double star, containing an F-type main-sequence star of magnitude 4.45 and a red dwarf of magnitude 12.5, from Earth. (Beta) β Caeli, another F-type star of magnitude 5.05, is further away, being located from Earth. Unlike α, β Caeli is a subgiant star, slightly evolved from the main sequence. (Delta) δ Caeli, also of magnitude 5.05, is a B-type subgiant and is much farther from Earth, at .
(Gamma) γ1 Caeli is a double star with a red giant primary of magnitude 4.58 and a secondary of magnitude 8.1. The primary is from Earth. The two components are difficult to resolve with small amateur telescopes because of their difference in visual magnitude and their close separation. This star system forms an optical double with the unrelated X Caeli (previously named γ2 Caeli), a Delta Scuti variable located from Earth. These are a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. The only other variable star in Caelum visible to the naked eye is RV Caeli, a pulsating red giant of spectral type M1III, which varies between magnitudes 6.44 and 6.56.
Three other stars in Caelum are still occasionally referred to by their Bayer designations, although they are only on the edge of naked-eye visibility. (Nu) ν Caeli is another double star, containing a white giant of magnitude 6.07 and a star of magnitude 10.66 of unknown spectral type. The system is approximately away. (Lambda) λ Caeli, at magnitude 6.24, is much redder and farther away, being a red giant around from Earth. (Zeta) ζ Caeli is even fainter, at magnitude 6.36. This star, located away, is a subgiant of spectral type K1. The other twelve naked-eye stars in Caelum are no longer referred to by Bode's Bayer designations, including RV Caeli.
One of the nearest stars in Caelum is the eclipsing binary star RR Caeli, at a distance of . This star system consists of a dim red dwarf and a white dwarf. Despite its closeness to the Earth, the system's apparent magnitude is only 14.40 due to the faintness of its components, and thus it cannot be easily seen with amateur equipment. The system is a post-common-envelope binary and is losing angular momentum over time, which will eventually cause mass transfer from the red dwarf to the white dwarf. In approximately 9–20 billion years, this will cause the system to become a cataclysmic variable. In 2012, the system was found to contain a giant planet, and there is evidence for a second substellar body; it is now believed that two planets orbit RR Caeli.
Another nearby star is LHS 1678, an astrometric binary located some 65 light-years away. The primary star is a red dwarf hosting three close-in exoplanets, all smaller than Earth; the secondary component is likely a brown dwarf. This system is notable as the closest known system to Alpha Caeli, just 3.3 light-years distant. Due to this closeness, α Caeli would shine at magnitude from LHS 1678, brighter than Sirius appears in our sky.
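The brightness claim can be illustrated with the standard distance-modulus relation m2 = m1 + 5 log10(d2/d1). The roughly 66-light-year distance used for α Caeli below is an assumed figure for illustration, since its distance is not stated above:

import math

def shifted_magnitude(m, d_from_ly, d_to_ly):
    # Apparent magnitude of the same star seen from a different distance.
    return m + 5 * math.log10(d_to_ly / d_from_ly)

# Alpha Caeli: magnitude 4.45 from Earth, ~66 ly away (assumed);
# from LHS 1678 it is only 3.3 ly away.
print(f"{shifted_magnitude(4.45, 66, 3.3):.1f}")
# about -2.1, brighter than Sirius (-1.46) in our sky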
Deep-sky objects.
Due to its small size and location away from the plane of the Milky Way, Caelum is rather devoid of deep-sky objects, and contains no Messier objects. The only deep-sky object in Caelum to receive much attention is HE0450-2958, an unusual Seyfert galaxy. Originally, the jet's host galaxy proved elusive to find, and this jet appeared to be emanating from nothing. Although it has been suggested that the object is an ejected supermassive black hole, the host is now agreed to be a small galaxy that is difficult to see due to light from the jet and a nearby starburst galaxy.
The 13th magnitude planetary nebula PN G243-37.1 is also in the eastern regions of the constellation. It is one of only a few planetary nebulae found in the galactic halo, being light-years below the Milky Way's 1000 light-year-thick disk.
Galaxies NGC 1595, NGC 1598, and the Carafe galaxy are known as the Carafe group. The Carafe galaxy is a Seyfert galaxy with a ring. Its location is 4:28 / −47°54' (2000.0).
|
6433
|
41126950
|
https://en.wikipedia.org/wiki?curid=6433
|
Clarinet
|
The clarinet is a single-reed musical instrument in the woodwind family, with a nearly cylindrical bore and a flared bell.
Clarinets comprise a family of instruments of differing sizes and pitches. The clarinet family is the largest woodwind family, ranging from the BB♭ contrabass to the A♭ piccolo. The B♭ soprano clarinet is the most common type, and is the instrument usually indicated by the word "clarinet".
German instrument maker Johann Christoph Denner is generally credited with inventing the clarinet sometime around 1700 by adding a register key to the chalumeau, an earlier single-reed instrument. Over time, additional keywork and airtight pads were added to improve the tone and playability. Today the clarinet is a standard fixture of the orchestra and concert band and is used in classical music, military bands, klezmer, jazz, and other styles.
Etymology.
The word "clarinet" may have entered the English language via the French (the feminine diminutive of Old French ), or from Provençal , originating from the Latin root . The word is related to Middle English , a type of trumpet, the name of which derives from the same root.
The earliest mention of the word "clarinette" being used for the instrument dates to a 1710 order placed by the Duke of Gronsfeld for two instruments made by Jacob Denner. The English form "clarinet" is found as early as 1733, and the now-archaic "clarionet" appears from 1784 until the early 20th century.
A person who plays the clarinet is called a "clarinetist" (in North American English), a "clarinettist" (in British English), or simply a clarinet player.
Development.
The modern clarinet developed from a Baroque instrument called the chalumeau. This instrument was similar to a recorder, but with a single-reed mouthpiece and a cylindrical bore. Lacking a register key, it was played mainly in its fundamental register, with a limited range of about one and a half octaves. It had eight finger holes, like a recorder, and a written pitch range from F3 to G4. At this time, contrary to modern practice, the reed was placed in contact with the upper lip.
Around the beginning of the 18th century the German instrument maker Johann Christoph Denner (or possibly his son Jacob Denner) equipped a chalumeau in the alto register with two keys, one of which enabled access to a higher register. This second register did not begin an octave above the first, as with other woodwind instruments, but started an octave and a perfect fifth higher than the first. A second key, at the top, extended the range of the first register to A4 and, together with the register key, to B4. Later, Denner lengthened the bell and provided it with a third key to extend the pitch range down to E3.
After Denner's innovations, other makers added keys to improve tuning and facilitate fingerings and the chalumeau fell into disuse. The clarinet of the Classical period, as used by Mozart, typically had five keys. Mozart suggested extending the clarinet downwards by four semitones to C, which resulted in the basset clarinet that was about longer, made first by Theodor Lotz. In 1791 Mozart composed the Concerto for Clarinet and Orchestra in A major for this instrument, with passages ranging down to C3. By the time of Beethoven (), the clarinet was a fixed member in the orchestra.
The number of keys was limited because their felt pads did not seal tightly. Iwan Müller invented the stuffed pad, originally made of kid leather. These, in combination with countersunk tone holes, sealed the keyholes sufficiently to permit the use of an increased number of keys. In 1812 Müller presented a clarinet with seven finger holes and thirteen keys, which he called "clarinet omnitonic" since it was capable of playing in all keys. It was no longer necessary to use differently tuned clarinets for different keys. Müller is also considered the inventor of the metal ligature and the thumb rest. During this period the typical embouchure also changed, orienting the mouthpiece with the reed facing downward. This was first recommended in 1782 and became standard by the 1830s.
In the late 1830s, German flute maker Theobald Böhm invented a ring and axle key system for the flute. This key system was first used on the clarinet between 1839 and 1843 by French clarinetist Hyacinthe Klosé in collaboration with instrument maker Louis Auguste Buffet. Their design introduced needle springs for the axles, and the ring keys simplified some complicated fingering patterns. The inventors called this the Boehm clarinet, although Böhm was not involved in its development and the system differed from the one used on the flute. Other key systems have been developed, many built around modifications to the basic Boehm system, including the Full Boehm, Mazzeo, McIntyre, the Benade NX, and the Reform Boehm system, which combined Boehm-system keywork with a German mouthpiece and bore.
The Albert clarinet was developed by Eugène Albert in 1848. This model was based on the Müller clarinet with some changes to keywork, and was also known as the "simple system". It included a "spectacle key" patented by Adolphe Sax and rollers to improve little-finger movement. After 1861, a "patent C sharp" key developed by Joseph Tyler was added to other clarinet models. Improved versions of Albert clarinets were built in Belgium and France for export to the UK and the US.
Around 1860, clarinettist Carl Baermann and instrument maker Georg Ottensteiner developed the patented Baermann/Ottensteiner clarinet. This instrument had new connecting levers, allowing multiple fingering options to operate some of the pads. In the early 20th century, the German clarinetist and clarinet maker Oskar Oehler presented a clarinet using similar fingerings to the Baermann instrument, with significantly more toneholes than the Böhm model. The new clarinet was called the Oehler system clarinet or German clarinet, while the Böhm clarinet has since been called the French clarinet. The French clarinet differs from the German not only in fingering but also in sound. Richard Strauss noted that "French clarinets have a flat, nasal tone, while German ones approximate the singing voice". Among modern instruments the difference is smaller, although intonation differences persist. The use of Oehler clarinets has continued in German and Austrian orchestras.
Today the Boehm system is standard everywhere except in Germany and Austria, where the Oehler clarinet is still used. Some contemporary Dixieland players continue to use Albert system clarinets. The Reform Boehm system is also popular in the Netherlands.
Acoustics.
The clarinet's cylindrical bore is the main reason for its distinctive timbre, which varies between the three main registers (the "chalumeau", "clarion", and "altissimo"). The A and B♭ clarinets have nearly the same bore and nearly identical tonal quality, although the A typically has a slightly warmer sound. The tone of the E♭ clarinet is brighter and can be heard through loud orchestral textures. The bass clarinet has a characteristically deep, mellow sound, and the alto clarinet sounds similar to the bass, though not as dark.
The production of sound by a clarinet follows these steps: the player blows air against the reed, causing it to vibrate and periodically open and close the gap between the reed and the mouthpiece; the resulting puffs of air create a compression wave that travels down the bore; and the wave is partially reflected at the first open tone hole or at the bell, establishing a standing wave whose frequency determines the pitch.
In addition to this primary compression wave, other waves, known as harmonics, are created. Harmonics are caused by factors including the imperfect wobbling and shaking of the reed, the reed sealing the mouthpiece opening for part of the wave cycle (which creates a flattened section of the sound wave), and imperfections (bumps and holes) in the bore. A wide variety of compression waves are created, but only some (primarily the odd harmonics) are reinforced. This in combination with the cut-off frequency (where a significant drop in resonance occurs) results in the characteristic tone of the clarinet.
The bore is cylindrical for most of the tube with an inner bore diameter between , but there is a subtle hourglass shape, with the thinnest part below the junction between the upper and lower joint. This hourglass shape, although invisible to the naked eye, helps to correct the pitch and responsiveness of the instrument. The diameter of the bore affects the instrument's sound characteristics. The bell at the bottom of the clarinet flares out to improve the tone and tuning of the lowest notes. Modern standard clarinets are tuned to 440 to 442 Hz—concert pitch is 440 Hz—but adjusting the length of the bore can alter tuning, for example to match the pitch of a larger ensemble. Other factors that impact tuning include temperature and dynamics.
Most modern clarinets have "undercut" tone holes that improve intonation and sound. Undercutting means chamfering the bottom edge of tone holes inside the bore. Acoustically, this makes the tone hole function as if it were larger, but its main function is to allow the air column to follow the curve up through the tone hole (surface tension) instead of "blowing past" it under the increasingly directional frequencies of the upper registers. Covering or uncovering the tone holes varies the length of the pipe, changing the resonant frequencies of the enclosed air column and hence the pitch. The player moves between the chalumeau and clarion registers through use of the register key. The open register key stops the fundamental frequency from being reinforced, making the reed vibrate at three times the frequency, which produces a note a twelfth above the original note.
The fixed reed and fairly uniform diameter of the clarinet result in an acoustical performance approximating that of a cylindrical stopped pipe. Recorders use a tapered internal bore to overblow at the octave when the thumb/register hole is pinched open, while the clarinet, with its cylindrical bore, overblows at the twelfth. The low chalumeau register plays fundamentals, but the clarion (second) register plays the third harmonics, a perfect twelfth higher than the fundamentals. The first several notes of the altissimo (third) range, aided by the register key and venting with the first left-hand hole, play the fifth harmonics, a perfect twelfth plus a major sixth above the fundamentals. The fifth and seventh harmonics are also available, sounding a further sixth and fourth (a flat, diminished fifth) higher respectively; these are the notes of the altissimo register.
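The twelfth can be seen directly in the closed-pipe formula f_n = n·v/(4L), which supports only odd harmonics. A small sketch; the 0.6 m effective length and the room-temperature speed of sound are illustrative assumptions, not measurements of a real instrument:

import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def closed_pipe_harmonics(length_m, count=3):
    # Odd harmonics only: n = 1, 3, 5, ...
    return [n * SPEED_OF_SOUND / (4 * length_m) for n in range(1, 2 * count, 2)]

f1, f3, f5 = closed_pipe_harmonics(0.6)
print(f"{f1:.0f} Hz, {f3:.0f} Hz, {f5:.0f} Hz")
# Overblowing from f1 to f3 triples the frequency:
print(f"{12 * math.log2(f3 / f1):.1f} semitones")  # 19.0, an octave plus a fifth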
The lip position and pressure, shaping of the vocal tract, choice of reed and mouthpiece, amount of air pressure created, and evenness of the airflow account for most of the player's ability to control the tone of a clarinet. Their vocal tract will be shaped to resonate at frequencies associated with the tone being produced.
Vibrato, a pulsating change of pitch, is rare in classical literature; however, certain performers, such as Richard Stoltzman, use vibrato in classical music. Other effects are glissando, growling, trumpet sounds, double tongue, flutter tongue and circular breathing. Special lip-bending may be used to play microtonal intervals. There have also been efforts to create a quarter tone clarinet.
Construction.
Materials.
Clarinet bodies have been made from a variety of materials including wood, plastic, hard rubber or Ebonite, metal, and ivory. The vast majority of wooden clarinets are made from African blackwood (grenadilla), or, more uncommonly, Honduran rosewood or cocobolo. Historically other woods, particularly boxwood and ebony, were used. Since the mid-20th century, clarinets (particularly student or band models) are also made from plastics, such as acrylonitrile butadiene styrene (ABS). One of the first such blends of plastic was Resonite, a term originally trademarked by Selmer. The Greenline model by Buffet Crampon is made from a composite of resin and the African blackwood powder left over from the manufacture of wooden clarinets.
Metal soprano clarinets were popular in the late 19th century, particularly for military use. Metal is still used for the bodies of some contra-alto and contrabass clarinets and the necks and bells of nearly all alto and larger clarinets.
Mouthpieces are generally made of hard rubber, although some inexpensive mouthpieces may be made of plastic. Other materials such as glass, wood, ivory, and metal have also been used. Ligatures are often made of metal and tightened using one or more adjustment screws; other materials include plastic, string, or fabric.
Reed.
The clarinet uses a single reed made from the cane of "Arundo donax". Reeds may also be manufactured from synthetic materials. The ligature fastens the reed to the mouthpiece. When air is blown through the opening between the reed and the mouthpiece facing, the reed vibrates and produces the clarinet's sound.
Most players buy manufactured reeds, although many make adjustments to these reeds, and some make their own reeds from cane "blanks". Reeds come in varying degrees of hardness, generally indicated on a scale from one (soft) through five (hard). This numbering system is not standardized—reeds with the same number often vary in hardness across manufacturers and models. Reed and mouthpiece characteristics work together to determine ease of playability and tonal characteristics.
Components.
The reed is attached to the mouthpiece by the ligature, and the top half-inch or so of this assembly is held in the player's mouth. In the past, string was used to bind the reed to the mouthpiece. The formation of the mouth around the mouthpiece and reed is called the embouchure. The reed is on the underside of the mouthpiece, pressing against the player's lower lip, while the top teeth normally contact the top of the mouthpiece (some players roll the upper lip under the top teeth to form what is called a 'double-lip' embouchure). Adjustments in the strength and shape of the embouchure change the tone and intonation. Players sometimes relieve the pressure on the upper teeth and inner lower lip by attaching a pad to the top of the mouthpiece or putting temporary cushioning on the lower teeth.
The mouthpiece attaches to the barrel. Tuning can be adjusted by using barrels of varying lengths or by pulling out the barrel to increase the instrument's length. On basset horns and lower clarinets, there is a curved metal neck instead of a barrel.
The main body of most clarinets has an upper joint, whose mechanism is mostly operated by the left hand, and a lower joint, mostly operated by the right hand. Some clarinets have a one-piece body. The modern soprano clarinet has numerous tone holes—seven are covered with the fingertips and the rest are operated using a set of 17 keys. The most common system of keys was named the Boehm system by its designer Hyacinthe Klosé after flute designer Theobald Boehm, but it is not the same as the Boehm system used on flutes. The other main key system is the Oehler system, which is used mostly in Germany and Austria. The related Albert system is used by some jazz, klezmer, and eastern European folk musicians. The Albert and Oehler systems are both based on the early Mueller system.
The cluster of keys at the bottom of the upper joint (protruding slightly beyond the cork of the joint) is known as the trill keys and is operated by the right hand. The entire weight of the smaller clarinets is supported by the right thumb behind the lower joint on what is called the thumb rest. Larger clarinets are supported with a neck strap or a floor peg.
Below the main body is a flared end known as the bell. The bell does not amplify the sound but improves the uniformity of the instrument's tone for the lowest notes in each register. For the other notes, the sound is produced almost entirely at the tone holes, and the bell is irrelevant. On basset horns and larger clarinets, the bell curves up and forward and is usually made of metal.
In the 1930s, some clarinets were manufactured with covered ('plateau') keys, but they were expensive and had issues with sound quality. They were designed for use in cold weather (allowing gloves to be worn), for saxophone or flute players doubling on clarinet, and for players with certain physical requirements.
Clarinet family and ranges.
Clarinets have the largest pitch range of common woodwinds. The range of a clarinet is usually divided into three registers. The low chalumeau register extends from the notated E3 (C3 if available) to the notated B♭4. The middle clarion register covers a little more than an octave (from the written B4 to C6). The high altissimo register consists of the notes above it. The three registers have characteristically different sounds: the chalumeau is full and dark, the clarion register is brighter and sweet, like a high trumpet heard from a distance, and the altissimo can be piercing and sometimes shrill.
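As an informal aid, the register boundaries just described amount to a simple threshold lookup on the written pitch. The following minimal Python sketch (illustrative only, not from the source) encodes them as MIDI numbers, assuming written E3 = 52, B♭4 = 70, B4 = 71, and C6 = 84:

    def clarinet_register(written_midi: int) -> str:
        """Classify a written clarinet note (given as a MIDI number) by register."""
        if written_midi < 52:        # below written E3 (C3 available only on basset instruments)
            return "below standard range"
        if written_midi <= 70:       # written E3 up to B-flat4
            return "chalumeau"
        if written_midi <= 84:       # written B4 up to C6
            return "clarion"
        return "altissimo"           # written notes above C6

    print(clarinet_register(60))     # written middle C -> "chalumeau"
    print(clarinet_register(72))     # written C5 -> "clarion"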
Initially only C clarinets were available, but soon clarinets in B♭ and A and basset horns in F and G were developed. From the 19th century to the middle of the 20th century, an extensive family of clarinets developed, from the high A♭ to the subcontrabass. Apart from the clarinets tuned in C (the C soprano clarinet and the basset clarinet in C), all clarinets are transposing instruments. The instruments above the C clarinet sound higher than notated, the aforementioned A♭ clarinet by a minor sixth; the longer instruments sound lower, the B♭ clarinet by one tone and the B♭ contrabass clarinet by two octaves and one tone.
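The transpositions above can be made concrete as written-to-sounding offsets in semitones. In the sketch below, the A♭, B♭, and B♭ contrabass offsets follow the text; the A clarinet's minor third is standard practice added for completeness, and the instrument names are illustrative labels:

    # Offset from written pitch to sounding pitch, in semitones
    # (positive means the instrument sounds higher than written).
    TRANSPOSITIONS = {
        "C soprano": 0,            # non-transposing
        "A-flat piccolo": 8,       # sounds a (minor) sixth higher
        "B-flat soprano": -2,      # sounds one tone lower
        "A soprano": -3,           # sounds a minor third lower (standard; not stated above)
        "B-flat contrabass": -26,  # sounds two octaves and one tone lower
    }

    def sounding_midi(written_midi: int, instrument: str) -> int:
        """Return the sounding MIDI note for a written note on the given clarinet."""
        return written_midi + TRANSPOSITIONS[instrument]

    # A written C5 (MIDI 72) on a B-flat clarinet sounds as B-flat4 (MIDI 70):
    print(sounding_midi(72, "B-flat soprano"))  # -> 70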
Performance practice.
The modern orchestra frequently includes two clarinetists, each usually equipped with a B♭ and an A clarinet, and clarinet parts commonly alternate between the instruments. The standard of using soprano clarinets in B♭ and A has to do partly with the history of the instrument and partly with acoustics and aesthetics. Before about 1800, due to the lack of airtight pads, practical woodwinds could have only a few keys. The low (chalumeau) register of the clarinet spans a twelfth (an octave plus a perfect fifth) before overblowing, so the clarinet needs keys/holes to produce all nineteen notes in this range. This involves more keywork than on instruments that "overblow" at the octave; oboes, flutes, bassoons, and saxophones need only twelve notes before overblowing. Since clarinets with few keys cannot play chromatically, they are limited to playing in closely related keys. With the advent of airtight pads and improved key technology, more keys were added to woodwinds and the need for clarinets in multiple keys was reduced. The use of instruments in C, B♭, and A persisted, with each used as specified by the composer.
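The note counts quoted above follow from simple interval arithmetic, restated here as a worked equation:

    \[
    \text{twelfth} = \text{octave} + \text{perfect fifth} = 12 + 7 = 19 \ \text{semitones}
    \]

A clarinet therefore needs fundamentals at the chromatic offsets 0 through 18 above its lowest written note, nineteen notes in all, before the first overblown note arrives; an instrument that overblows at the octave needs only the twelve offsets 0 through 11.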
The lower-pitched clarinets sound "mellower" (less bright), and the C clarinet, the highest and brightest sounding of these three, fell out of favor as the other two could cover its range and their sound was considered better. While the clarinet in C began to fall out of general use around 1850, some composers continued to write C parts. Others employed many different clarinets, including the E♭ or D soprano clarinets, basset horn, bass clarinet, and contrabass clarinet. The practice of using different clarinets to achieve tonal variety was common in 20th-century classical music. While technical improvements and an equal-tempered scale reduced the need for two clarinets, the technical difficulty of playing in remote keys persisted, and the A has remained a standard orchestral instrument.
Common combinations involving clarinet in chamber music are:
The A clarinet, B♭ clarinet, alto clarinet, bass clarinet, and contra-alto/contrabass clarinet are commonly used in concert bands, which generally have multiple B♭ clarinets; there are commonly three or even four B♭ clarinet parts with two to three players per part. The clarinet is also used in military bands; author Eric Hoeprich suggests that "it was the role of the clarinet in the military band... that ultimately provided the key to its future popularity", since it was particularly suited to the ensemble.
A clarinet choir is an ensemble of many clarinets playing together, usually including several members of the clarinet family. This type of ensemble first emerged in 1927. The homogeneity of tone across the different members of the clarinet family produces an effect with some similarities to a human choir. Parts for non-clarinets, such as voice or French horn, are sometimes included in the repertoire.
Repertoire.
Classical.
The clarinet evolved later than other orchestral woodwind instruments, leaving solo repertoire from the Classical period onward, but few works from the Baroque era. Examples of the first uses of clarinets include Vivaldi's 1716 oratorio "Juditha triumphans" with two C clarinets, and Handel's 1740 "Ouverture" for two clarinets and horn. In the 1750s, clarinets were introduced in the orchestra of La Pouplinière in Paris. Johann Stamitz composed the first known concerto for B♭ clarinet for the principal clarinetist of this orchestra. Johann Melchior Molter wrote six clarinet concertos for clarinet in D, the first dated to around 1742.
Clarinets appeared in the Mannheim orchestra under Stamitz and in other orchestras from 1758, but were not commonly used before the 19th century. "Harmonie" wind ensembles including clarinets were common from the mid-18th century. Classical composers of solo or duo concertos for this instrument included Karl Stamitz and František Xaver Pokorný. The first clarinet sonata was written in 1770 by the Neapolitan composer Gregorio Sciroli.
Wolfgang Amadeus Mozart first used the clarinet in 1771 in his Divertimento K. 113 and later in the "Paris Symphony" of 1778. From "Idomeneo" onward, the clarinet appeared in all his operas, as well as in his symphonies and piano concertos. His chamber works for clarinet include the "Gran Partita", the Clarinet Quintet, and the Kegelstatt Trio. The latter two works were written for his friend, virtuoso Anton Stadler, as was his Clarinet Concerto. Beethoven's chamber music highlights the instrument, particularly in the Quintet Op. 16, the Septet Op. 20 and Trio Op. 38.
While the Classical period made frequent use of the clarinet, it was in the Romantic era that the instrument became an integral part of the orchestra. The clarinet became a staple, with composers such as Schubert, Mendelssohn, Berlioz, Dvořák, Smetana, Brahms, Tchaikovsky, and Rimsky-Korsakov writing prominent clarinet passages in their orchestral works. In Romantic opera orchestration, the clarinet frequently takes on expressive, lyrical roles. The clarinet section expanded to three or more players, with some performing on auxiliary instruments such as the bass clarinet. Certain operas, such as Strauss's "Elektra", require up to eight players.
Chamber music featuring the clarinet became increasingly diverse. The instrument appears in the works of Franz Schubert (Octet), Felix Mendelssohn (sonata with piano), Robert Schumann ("Phantasiestücke" for clarinet and piano, "Märchenerzählungen" with piano and viola), and Johannes Brahms (two sonatas, the Trio with cello and piano and the Clarinet Quintet for Clarinet in A and string quartet). Carl Maria von Weber wrote several major works for the clarinet, including the Clarinet Concerto No. 1 in F minor, the Clarinet Concerto No. 2 in E flat major, and the Grand Duo Concertant for clarinet and piano. However, from 1830 until 1900 "no major composer wrote a clarinet concerto, and the few concertos written for the instrument in this time period have not found a secure place in the repertoire".
The clarinet is used frequently in 20th- and 21st-century classical music. It embodies the cat in "Peter and the Wolf" by Sergei Prokofiev, and the symphonies of Shostakovich "provide a veritable compendium of writing for all members of the orchestral clarinet family; for him the instruments provided a toolkit for the expression of the deepest tragedy as well as the sharpest satire". Significant pieces for unaccompanied clarinet include "Three Pieces" (1919) by Igor Stravinsky and "L'abîme des oiseaux" from the "Quatuor pour la fin du temps" (1941) by Olivier Messiaen. Concertos with orchestral accompaniment from this period include those by Carl Nielsen and Aaron Copland. Sonatas were composed by Felix Draeseke, Max Reger, Arnold Bax, John Ireland, Francis Poulenc, Leonard Bernstein, and Paul Hindemith. Notable chamber works include "Four Pieces" by Alban Berg, "Contrasts" for clarinet, violin, and piano by Béla Bartók, "The Soldier's Tale" by Stravinsky, and the Suite for clarinet, violin and piano by Darius Milhaud.
Jazz.
The clarinet was a central instrument in jazz, beginning with early jazz players in the 1910s. It remained a signature instrument of the genre through much of the big band era into the 1940s. One of the most recognizable clarinet excerpts is the virtuoso glissando that introduces the 1924 "Rhapsody in Blue" by George Gershwin. Swing performers such as Benny Goodman and Artie Shaw rose to prominence in the late 1930s.
Beginning in the 1940s, the clarinet faded from its prominent position in jazz. By that time, an interest in Dixieland, a revival of traditional New Orleans jazz, had begun. Pete Fountain was one of the best known performers in this genre. The clarinet's place in the jazz ensemble was usurped by the saxophone, which projects a more powerful sound and uses a less complicated fingering system. The clarinet did not entirely disappear from jazz—prominent players since the 1950s include Stan Hasselgård, Jimmy Giuffre, Eric Dolphy (on bass clarinet), Perry Robinson, and John Carter. In the US, the prominent players on the instrument since the 1980s have included Eddie Daniels, Don Byron, Marty Ehrlich, Ken Peplowski, and others playing in both traditional and contemporary styles.
Other genres.
The clarinet is uncommon, but not unheard of, in rock music. Jerry Martini played clarinet on Sly and the Family Stone's 1968 hit, "Dance to the Music". The Beatles included a trio of clarinets in "When I'm Sixty-Four" from their "Sgt. Pepper's Lonely Hearts Club Band" album. A clarinet is prominently featured in what a "Billboard" reviewer termed a "Benny Goodman-flavored clarinet solo" in "Breakfast in America", the title song from the Supertramp album of the same name.
The clarinet has a significant role in vernacular music in many parts of the world. Clarinets feature prominently in klezmer music, which employs a distinctive style of playing. The popular Brazilian music style of choro uses the clarinet, as does Albanian "saze" and Greek "kompania" folk music, and Bulgarian wedding music. In Turkish folk music, the Albert system clarinet in G is often used, commonly called a "Turkish clarinet".
|
6434
|
38469862
|
https://en.wikipedia.org/wiki?curid=6434
|
Chojnów
|
Chojnów is a small town in Legnica County, Lower Silesian Voivodeship, in south-western Poland. It is located on the Skora river, a tributary of the Kaczawa. Chojnów is the administrative seat of the rural gmina called Gmina Chojnów, although the town is not part of its territory and forms a separate urban gmina. As of December 2021, the town has 13,002 inhabitants.
Chojnów is located west of Legnica, east of Bolesławiec and north of Złotoryja, near the A4 motorway. It has railroad connections to Bolesławiec and Legnica.
Heraldry.
The Chojnów coat of arms is a blue escutcheon featuring a white castle with three towers. To the right side of the central tower is a silver crescent moon and to its left side a golden sun. In the gate of the castle is a Silesian Eagle on a yellow background. Chojnów's motto is "Friendly Town".
Geography.
Chojnów is located in the central-western part of the Lower Silesia region. The Skora (Leather) River flows through the town in a westerly direction. Agricultural land makes up 41% of the town's area.
Chojnów has road and rail connections with the major cities of the country, and the A4 Autostrada runs south of the town. The town is surrounded by the Chojnowska Plain, which extends to the south.
History.
The town is first mentioned in a Latin mediaeval document issued in Wrocław on February 26, 1253, by the Silesian Duke Henry III, in which the town appears under the name Honowo, possibly related to the name of nearby Hainau Island. The name is of Polish origin, and in more modern records from the 19th century, the Polish name appears as "Hajnów", while "Haynau" is the Germanized version of the original Polish name.
The settlement of "Haynow" was mentioned in a 1272 deed. It was already called a "civitas" in a 1288 document issued by the Piast duke Henry V of Legnica, and officially received town privileges in 1333 from Duke Bolesław III the Generous. It was part of the duchies of Wrocław, Głogów and Legnica of fragmented Poland and remained under the rule of the Piast dynasty until 1675. Its population was predominantly Polish. In 1292 the first castellan of Chojnów, Bronisław Budziwojowic, was mentioned. In the 14th and early 15th centuries Chojnów was granted various privileges, including staple right and gold mining right, thanks to which it flourished.
The town survived the Hussites, who burned almost the entire town center and castle, and quickly recovered its former glory. Chojnów experienced its greatest boom in the 16th century; by the end of that century, however, it began to decline due to fires and an epidemic, which claimed many victims in 1613. During the Thirty Years' War (1618–1648), there was another outbreak of disease in the city; it was occupied by the Austrians and Swedes, and in 1642 it was also plundered by the Swedes. It remained part of the Piast-ruled Duchy of Legnica until its dissolution in 1675, when it was incorporated into Habsburg-ruled Bohemia.
In the 18th century, cloth production developed and a clothmaking school was established in the town. One of two main routes connecting Warsaw and Dresden ran through the town in the 18th century, and Kings Augustus II the Strong and Augustus III of Poland traveled that route numerous times. In 1740 the town was captured by Prussia and subsequently annexed in 1742. In 1804 it suffered a flood. During the Napoleonic Wars there were more epidemics. In 1813 in Chojnów, Napoleon Bonaparte issued instructions regarding the reorganization of the 8th Polish Corps of Prince Józef Poniatowski; the event is commemorated by a plaque on the facade of the Piast Castle. A railway line was opened in the 19th century. A sewer system, gas lighting, a newspaper, and a hospital soon followed as the town's economy improved.
The city was not spared in World War II: 30% of the town was destroyed on February 10, 1945, when Soviet Red Army troops took the abandoned town. After World War II and the implementation of the Oder-Neisse line in 1945, the town passed to the Republic of Poland. It was repopulated by Poles expelled from former eastern Poland, which had been annexed by the Soviet Union. In 1946 it was renamed "Chojnów", a more modern version of the old Polish "Hajnów". Greeks, refugees of the Greek Civil War, also settled in Chojnów.
Economy.
Chojnów is an industrial and agricultural town. Among local products are: paper, agricultural machinery, chains, metal furniture for hospitals, equipment for the meat industry, beer, wine, leather clothing, and clothing for infants, children and adults.
Sights and nature.
Among the interesting monuments of Chojnów are the 13th-century castle of the Dukes of Legnica (currently used as a museum), two old churches, the "Baszta Tkaczy" ("Weavers' Tower") and preserved fragments of city walls.
The biggest green area in Chojnów is the small forest "Park Piastowski" ("Piast's Park"), named after the Piast dynasty. Wild animals found in the Chojnów area include roe deer, foxes, rabbits, and feral domestic animals, especially cats.
Culture and sport.
Every year in the first days of June, the "Days of Chojnów" ("Dni Chojnowa") are celebrated. The nationwide "Masters" bike race has been organized in Chojnów yearly for the past few years.
Chojnów has a municipal sports and recreation center, formed in 2008, which holds various events, festivals, reviews, exhibitions, and competitions. The regional museum is housed in the old Piast-era castle; its collections include tiles and other relics, and it also maintains the castle garden. Next to the museum there is a municipal library. In the town-center park near the Town Hall is an amphitheatre.
The local government-run newspaper is "Gazeta Chojnowska", which has been published since 1992. It is published biweekly, with a print run of 900 copies, and is one of the oldest newspapers in Poland issued without interruption. The "Chojnów" is the official newspaper of Chojnów, with a print run of 750 copies.
Education.
In Chojnów, there are two kindergartens, two elementary schools and two middle schools.
Religion.
Chojnów is in the Catholic deanery of Chojnów and has two parishes, Immaculate Conception of the Blessed Virgin Mary and also the Holy Apostles Peter and Paul. Both parishes have active congregations.
There are also two congregations of Jehovah's Witnesses.
Twin towns – sister cities.
Chojnów is twinned with:
|
6435
|
42021989
|
https://en.wikipedia.org/wiki?curid=6435
|
Canes Venatici
|
Canes Venatici is one of the 88 constellations designated by the International Astronomical Union (IAU). It is a small northern constellation that was created by Johannes Hevelius in the 17th century. Its name is Latin for 'hunting dogs', and the constellation is often depicted in illustrations as representing the dogs of Boötes the Herdsman, a neighboring constellation.
Cor Caroli is the constellation's brightest star, with an apparent magnitude of 2.9. La Superba (Y CVn) is one of the reddest naked-eye stars and one of the brightest carbon stars. The Whirlpool Galaxy is a spiral galaxy tilted face-on to observers on Earth, and was the first galaxy whose spiral nature was discerned. In addition, the quasar TON 618 hosts one of the most massive known black holes, with a mass of 66 billion solar masses.
History.
The stars of Canes Venatici are not bright. In classical times, they were listed by Ptolemy as unfigured stars below the constellation Ursa Major in his star catalogue.
In medieval times, the identification of these stars with the dogs of Boötes arose through a mistranslation: some of Boötes's stars were traditionally described as representing his club. When the Greek astronomer Ptolemy's "Almagest" was translated from Greek to Arabic, the translator Hunayn ibn Ishaq did not know the Greek word for the club and rendered it as a similar-sounding compound Arabic word for a kind of weapon, one meaning 'the staff having a hook'.
When the Arabic text was later translated into Latin, the translator, Gerard of Cremona, mistook the Arabic word for 'hook' for the Arabic word for 'dogs'; the two words look the same in Arabic text without diacritics. He therefore rendered the phrase as 'spearshaft-having dogs'.
In 1533, the German astronomer Peter Apian depicted Boötes as having two dogs with him.
These spurious dogs floated about the astronomical literature until Hevelius decided to make them a separate constellation in 1687. Hevelius chose the name "Asterion" for the northern dog and "Chara" for the southern dog, labelling the pair "Canes Venatici", 'the hunting dogs', in his star atlas.
In his star catalogue, the Czech astronomer Antonín Bečvář assigned the names "Asterion" to β CVn and "Chara" to α CVn.
Although the International Astronomical Union dropped several constellations in 1930 that were medieval and Renaissance innovations, Canes Venatici survived to become one of the 88 IAU designated constellations.
Neighbors and borders.
Canes Venatici is bordered by Ursa Major to the north and west, Coma Berenices to the south, and Boötes to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CVn". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides.
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between +27.84° and +52.36°. Covering 465 square degrees, it ranks 38th of the 88 constellations in size.
Prominent stars and deep-sky objects.
Stars.
Canes Venatici contains no very bright stars. The stars with Bayer designations, Alpha and Beta Canum Venaticorum, are only of third and fourth magnitude, respectively. Flamsteed catalogued 25 stars in the constellation, labelling them 1 to 25 Canum Venaticorum (CVn); however, 1 CVn turned out to be in Ursa Major, 13 CVn was in Coma Berenices, and 22 CVn did not exist.
Supervoid.
The Giant Void, an extremely large void (a part of the universe containing very few galaxies), lies within the vicinity of this constellation. It is regarded as the second-largest void ever discovered, slightly larger than the Eridanus Supervoid, smaller than the proposed KBC Void, and 1,200 times the volume of a typical void. It was discovered in 1988 in a deep-sky survey. Its centre is approximately 1.5 billion light-years away.
Deep-sky objects.
Canes Venatici contains five Messier objects, including four galaxies. One of the most significant galaxies in Canes Venatici is the Whirlpool Galaxy (M51, NGC 5194), a spiral galaxy seen face-on, together with its small barred companion NGC 5195. M51 was the first galaxy recognised as having a spiral structure, the structure being first observed by Lord Rosse in 1845; it lies 37 million light-years from Earth. Widely considered one of the most beautiful galaxies visible, M51 has many star-forming regions and nebulae in its arms, coloring them pink and blue in contrast to the older yellow core. Its companion, NGC 5195, has very few star-forming regions and thus appears yellow. NGC 5195 is passing behind M51 and may be the cause of the larger galaxy's prodigious star formation.
Other notable spiral galaxies in Canes Venatici are the Sunflower Galaxy (M63, NGC 5055), M94 (NGC 4736), and M106 (NGC 4258).
TON 618 is a hyperluminous quasar and blazar in this constellation, near its border with the neighboring Coma Berenices. It possesses a black hole with a mass 66 billion times that of the Sun, making it one of the most massive black holes ever measured. The constellation also contains a Lyman-alpha blob.
|
6436
|
42021989
|
https://en.wikipedia.org/wiki?curid=6436
|
Chamaeleon
|
Chamaeleon is a small constellation in the deep southern sky. It is named after the chameleon, a kind of lizard. It was first defined in the 16th century.
History.
Chamaeleon was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius and Jodocus Hondius. Johann Bayer was the first uranographer to put Chamaeleon in a celestial atlas. It was one of many constellations created by European explorers in the 15th and 16th centuries out of unfamiliar Southern Hemisphere stars.
Features.
Stars.
There are four bright stars in Chamaeleon that form a compact diamond-shape approximately 10 degrees from the south celestial pole and about 15 degrees south of Acrux, along the axis formed by Acrux and Gamma Crucis. Alpha Chamaeleontis is a white-hued star of magnitude 4.1, 63 light-years from Earth. Beta Chamaeleontis is a blue-white hued star of magnitude 4.2, 271 light-years from Earth. Gamma Chamaeleontis is a red-hued giant star of magnitude 4.1, 413 light-years from Earth. The other bright star in Chamaeleon is Delta Chamaeleontis, a wide double star. The brighter star is Delta2 Chamaeleontis, a blue-hued star of magnitude 4.4. Delta1 Chamaeleontis, the dimmer component, is an orange-hued giant star of magnitude 5.5. They both lie about 350 light years away.
Chamaeleon is also the location of Cha 110913, a unique dwarf star or proto solar system.
Deep-sky objects.
In 1999, a nearby open cluster was discovered centered on the star η Chamaeleontis. The cluster, known as either the Eta Chamaeleontis cluster or Mamajek 1, is 8 million years old and lies 316 light years from Earth.
The constellation contains a number of molecular clouds (the Chamaeleon dark clouds) that are forming low-mass T Tauri stars. The cloud complex lies some 400 to 600 light years from Earth, and contains tens of thousands of solar masses of gas and dust. The most prominent cluster of T Tauri stars and young B-type stars is in the Chamaeleon I cloud, and is associated with the reflection nebula IC 2631.
Chamaeleon contains one planetary nebula, NGC 3195, which is fairly faint. It appears in a telescope at about the same apparent size as Jupiter.
Equivalents.
In Chinese astronomy, the stars that form Chamaeleon were classified as the Little Dipper among the Southern Asterisms by Xu Guangqi. Chamaeleon is sometimes also called the Frying Pan in Australia.
|
6437
|
2428506
|
https://en.wikipedia.org/wiki?curid=6437
|
Cholesterol
|
Cholesterol is the principal sterol of all higher animals, distributed in body tissues, especially the brain and spinal cord, and in animal fats and oils.
Cholesterol is biosynthesized by all animal cells and is an essential structural and signaling component of animal cell membranes. In vertebrates, hepatic cells typically produce the greatest amounts. In the brain, astrocytes produce cholesterol and transport it to neurons. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as "Mycoplasma", which require cholesterol for growth. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D.
Elevated levels of cholesterol in the blood, especially when bound to low-density lipoprotein (LDL, often referred to as "bad cholesterol"), may increase the risk of cardiovascular disease.
François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. In 1815, chemist Michel Eugène Chevreul named the compound "cholesterine".
Etymology.
The word "cholesterol" comes from Ancient Greek "chole-" 'bile' and "stereos" 'solid', followed by the chemical suffix "-ol" for an alcohol.
Physiology.
Cholesterol is essential for all animal life. While most cells are capable of synthesizing it, the majority of cholesterol is ingested or synthesized by hepatocytes and transported in the blood to peripheral cells. The levels of cholesterol in peripheral tissues are dictated by a balance of uptake and export. Under normal conditions, brain cholesterol is separate from peripheral cholesterol, i.e., dietary and hepatic cholesterol do not cross the blood-brain barrier. Rather, astrocytes produce and distribute cholesterol in the brain.
De novo synthesis, both in astrocytes and hepatocytes, occurs by a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes.
Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food has little, if any, effect on blood cholesterol concentrations seven to ten hours after ingestion. Surprisingly, in rats, blood cholesterol is inversely correlated with cholesterol consumption: the more cholesterol a rat eats, the lower its blood cholesterol. During the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase.
Plants make cholesterol in very small amounts. In larger quantities they produce phytosterols, chemically similar substances which can compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols, in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between ≈200–300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day.
Function.
Membranes.
Cholesterol is present in varying degrees in all animal cell membranes, but is absent in prokaryotes. It is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chain of the other lipids. Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move.
The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a "trans" conformation making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions.
Substrate presentation.
Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated causing the enzyme to traffic to cholesterol dependent lipid domains sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC) which is unsaturated and is of low abundance in lipid rafts. PC localizes to the disordered region of the cell along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2 where it then gains access to its substrate PC and commences catalysis based on substrate presentation.
Signaling.
Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which brings receptor proteins in close proximity with high concentrations of second messenger molecules. In multiple layers, cholesterol and phospholipids, both electrical insulators, can facilitate the speed of transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell or oligodendrocyte membranes, provides insulation for more efficient conduction of impulses. Demyelination (loss of myelin) is believed to be part of the basis for multiple sclerosis.
Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα), and may be the endogenous ligand for the receptor. The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that the ERRα should be de-orphanized and classified as a receptor for cholesterol.
As a chemical precursor.
Within cells, cholesterol is also a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in the calcium metabolism and all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives.
Epidermis.
The stratum corneum is the outermost layer of the epidermis. It is composed of terminally differentiated and enucleated corneocytes that reside within a lipid matrix, like "bricks and mortar." Together with ceramides and free fatty acids, cholesterol forms the lipid mortar, a water-impermeable barrier that prevents evaporative water loss. As a rule of thumb, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (≈50% by weight), cholesterol (≈25% by weight), and free fatty acids (≈15% by weight), with smaller quantities of other lipids also being present. Cholesterol sulfate reaches its highest concentration in the granular layer of the epidermis. The enzyme steroid sulfatase then decreases its concentration in the stratum corneum, the outermost layer of the epidermis. The relative abundance of cholesterol sulfate in the epidermis varies across different body sites, with the heel of the foot having the lowest concentration.
Metabolism.
Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are stored in the gallbladder and then excreted in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream.
Biosynthesis and regulation.
Biosynthesis.
Almost all animal tissues synthesize cholesterol from acetyl-CoA. All animal cells (exceptions exist within the invertebrates) manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the brain, the adrenal glands, and the reproductive organs.
Synthesis within the body starts with the mevalonate pathway where two molecules of acetyl CoA condense to form acetoacetyl-CoA. This is followed by a second condensation between acetyl CoA and acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl CoA (HMG-CoA).
This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs).
Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one decarboxylation step that requires ATP.
Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase.
Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum.
Oxidosqualene cyclase then cyclizes squalene to form lanosterol.
Finally, lanosterol is converted to cholesterol via either of two pathways, the Bloch pathway, or the Kandutsch-Russell pathway.
The final 19 steps to cholesterol involve NADPH and oxygen to oxidize methyl groups for removal of carbons, mutases to move alkene groups, and NADH to help reduce ketones.
Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism.
Regulation of cholesterol synthesis.
Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. A higher intake of cholesterol from food leads to a net decrease in endogenous production, whereas a lower intake from food has the opposite effect. The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the proteins SREBP (sterol regulatory element-binding proteins 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low.
The cleaved SREBP then migrates to the nucleus and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase in endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates the expression of many genes that control lipid formation and metabolism and body fuel allocation.
Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain. The membrane domain senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, which makes it more susceptible to destruction by the proteasome. This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low.
Plasma transport and regulation of absorption.
As an isolated molecule, cholesterol is only minimally soluble in water. Because of this, it dissolves in blood at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids, whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble. This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle along with phospholipids and proteins. Cholesterol esters bound to fatty acid, on the other hand, are transported within the fatty hydrophobic core of the lipoprotein, along with triglyceride.
There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). Lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some are carried as their native "free" alcohol form (the cholesterol-OH group facing the water surrounding the particles), while others as fatty acyl esters, known also as cholesterol esters, within the particles.
Lipoprotein particles are organized by complex apolipoproteins, typically 80–100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into specific cells and tissues currently ingesting these fat transport particles. These surface receptors serve as unique molecular signatures, which then help determine fat distribution delivery throughout the body.
Chylomicrons, the least dense cholesterol transport particles, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in the more cholesterol-rich chylomicron remnants, which are taken up from the bloodstream by the liver.
VLDL particles are produced by the liver from triacylglycerol and cholesterol which was not used in the synthesis of bile acids. These particles contain apolipoprotein B100 and apolipoprotein E in their shells and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL particles are then consumed in two processes: half is metabolized by HTGL and taken up by the LDL receptor on the liver cell surfaces, while the other half continues to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles.
LDL particles are the major blood cholesterol carriers. Each one contains approximately 1,500 molecules of cholesterol ester. LDL particle shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. Both LDL and its receptor form vesicles within a cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis or esterified and stored within the cell, so as to not interfere with the cell membranes.
LDL receptors are used up during cholesterol absorption, and their synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol "de novo", according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL particles from being taken up. Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol.
When this process becomes unregulated, LDL particles without receptors begin to appear in the blood. These LDL particles are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with "bad" cholesterol.
HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT). A large number of HDL particles correlates with better health outcomes, whereas a low number of HDL particles is associated with atheromatous disease progression in the arteries.
Metabolism, recycling and excretion.
Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. Three different mechanisms can form these: autoxidation, oxidation secondary to lipid peroxidation, and cholesterol-metabolizing enzyme oxidation. A great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription.
In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated-cholesterol, are used. These derivatives undergo degradation upon storage, and it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns.
Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids forms the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallises and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to 1 g of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and it can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces.
Although cholesterol is a steroid generally associated with mammals, the human pathogen "Mycobacterium tuberculosis" is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, but have evolved in such a way as to bind large steroid substrates like cholesterol.
Dietary sources.
Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed. Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, shellfish, and butter. Human breast milk also contains significant quantities of cholesterol.
Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, with cholesterol remaining in plant foods only in minor amounts or absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines and reduce the absorption of both dietary and bile cholesterol. A typical diet contributes on the order of 0.2 gram of phytosterols, which is not enough to have a significant impact on blocking cholesterol absorption. Phytosterols intake can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol.
Medical guidelines and recommendations.
In 2015, the scientific advisory panel of the U.S. Department of Health and Human Services and U.S. Department of Agriculture for the 2015 iteration of the Dietary Guidelines for Americans dropped the previously recommended limit of 300 mg per day on consumption of dietary cholesterol, issuing a new recommendation to "eat as little dietary cholesterol as possible", thereby acknowledging an association between a diet low in cholesterol and reduced risk of cardiovascular disease.
A 2013 report by the American Heart Association and the American College of Cardiology recommended focusing on healthy dietary patterns rather than specific cholesterol limits, as specific limits are hard for clinicians and consumers to implement. They recommend the DASH and Mediterranean diets, which are low in cholesterol. A 2017 review by the American Heart Association recommends replacing saturated fats with polyunsaturated fats to reduce cardiovascular disease risk.
Some supplemental guidelines have recommended doses of phytosterols in the 1.6–3.0 grams per day range (Health Canada, EFSA, ATP III, FDA). A meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. The benefits of a diet supplemented with phytosterols have also been questioned.
Clinical significance.
Hypercholesterolemia.
According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the HDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined, but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects. Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people.
Elevated levels of the lipoprotein fractions, LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progress of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. A "post hoc" analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol".
About one in 250 individuals has a genetic mutation in the LDL cholesterol receptor gene that causes familial hypercholesterolemia. Inherited high cholesterol can also involve genetic mutations in the PCSK9 gene and the gene for apolipoprotein B.
Elevated cholesterol levels are treatable by a diet that reduces or eliminates saturated fat, and trans fats, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, monoclonal antibody therapy (PCSK9 inhibitors), nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolemia.
Human trials using HMG-CoA reductase inhibitors, known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL (1 mmol/L) with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex.
The 1987 report of the National Cholesterol Education Program, Adult Treatment Panels, suggests the total blood cholesterol level should be: < 200 mg/dL normal blood cholesterol, 200–239 mg/dL borderline-high, > 240 mg/dL high cholesterol; the American Heart Association provides a similar set of guidelines for total (fasting) blood cholesterol levels and risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease. The average global mean total cholesterol for humans has remained at about 4.6 mmol/L (178 mg/dL) for men and women, both crude and age-standardized, for nearly 40 years from 1980 to 2018, with some regional variations and a reduction of total cholesterol in Western nations.
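As a minimal illustration (not medical advice), the 1987 bands quoted above reduce to a threshold lookup; how to band a value of exactly 240 mg/dL is an assumption here, since the quoted ranges leave it unassigned:

    def ncep_1987_total_cholesterol_band(total_mg_dl: float) -> str:
        """Band a total blood cholesterol value (mg/dL) per the 1987 NCEP report."""
        if total_mg_dl < 200:
            return "normal"
        if total_mg_dl < 240:    # 200-239 mg/dL; exactly 240 treated as "high" here
            return "borderline-high"
        return "high"

    print(ncep_1987_total_cholesterol_band(178))  # -> "normal"
    print(ncep_1987_total_cholesterol_band(225))  # -> "borderline-high"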
More current testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 100 mg/dL (2.6 mmol/L).
Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides and the LDL is estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]. Direct LDL measures are used when triglycerides exceed 400 mg/dL. The estimated VLDL and LDL have more error when triglycerides are above 400 mg/dL.
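A short sketch of the Friedewald estimate just described, including the over-400 mg/dL caveat from the text (illustrative only; all values in mg/dL):

    def friedewald_estimated_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
        """Estimate LDL from a standard lipid panel (all values in mg/dL)."""
        if triglycerides > 400:
            raise ValueError("estimate unreliable above 400 mg/dL; measure LDL directly")
        estimated_vldl = triglycerides / 5.0  # VLDL approximated as one-fifth of triglycerides
        return total_chol - hdl - estimated_vldl

    # Example: total 200, HDL 50, triglycerides 150 -> estimated VLDL 30, estimated LDL 120
    print(friedewald_estimated_ldl(200, 50, 150))  # -> 120.0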
In the Framingham Heart Study, each 10 mg/dL (0.26 mmol/L) increase in total cholesterol levels increased 30-year overall mortality by 5% and CVD mortality by 9%. Subjects over the age of 50, however, had an 11% increase in overall mortality and a 14% increase in cardiovascular disease mortality per 1 mg/dL (0.026 mmol/L) per year drop in total cholesterol levels. The researchers attributed this phenomenon to a different correlation, whereby the disease itself increases risk of death and also changes a myriad of factors, such as weight loss and the inability to eat, which lower serum cholesterol. This effect was also shown in men of all ages and women over 50 in the Vorarlberg Health Monitoring and Promotion Programme. These groups were more likely to die of cancer, liver diseases, and mental diseases with very low total cholesterol, of 186 mg/dL (4.8 mmol/L) and lower. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a marker for frailty occurring with age.
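The parenthetical conversions in this section follow from cholesterol's molar mass of roughly 386.7 g/mol, which gives approximately 38.67 mg/dL per mmol/L; a quick sketch of the arithmetic (the factor is an approximation):

    CHOLESTEROL_MG_DL_PER_MMOL_L = 38.67  # ~386.7 g/mol gives 38.67 (mg/dL) per (mmol/L)

    def cholesterol_mg_dl_to_mmol_l(mg_dl: float) -> float:
        """Convert a cholesterol concentration from mg/dL to mmol/L."""
        return mg_dl / CHOLESTEROL_MG_DL_PER_MMOL_L

    print(round(cholesterol_mg_dl_to_mmol_l(10), 2))   # -> 0.26
    print(round(cholesterol_mg_dl_to_mmol_l(186), 1))  # -> 4.8
    print(round(cholesterol_mg_dl_to_mmol_l(193), 1))  # -> 5.0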
Hypocholesterolemia.
Abnormally low levels of cholesterol are termed "hypocholesterolemia". Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance which causes upregulation of the LDL receptor, may result in hypocholesterolemia.
Testing.
The American Heart Association recommends testing cholesterol every 4–6 years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that people taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. For men ages 45 to 65 and women ages 55 to 65, a cholesterol test should occur every 1–2 years, and for seniors over age 65, an annual test should be performed.
A blood sample taken after 12 hours of fasting is drawn by a healthcare professional from an arm vein to measure a lipid profile for a) total cholesterol, b) HDL cholesterol, c) LDL cholesterol, and d) triglycerides. LDL results may be reported as "calculated", indicating that the value was derived from the measured total cholesterol, HDL, and triglycerides rather than measured directly.
Cholesterol levels are considered "normal" or "desirable" if a person has a total cholesterol of 5.2 mmol/L (200 mg/dL) or less, an HDL value of more than 1 mmol/L (40 mg/dL; "the higher, the better"), an LDL value of less than 2.6 mmol/L (100 mg/dL), and a triglyceride level of less than 1.7 mmol/L (150 mg/dL). Blood cholesterol in people with lifestyle, aging, or cardiovascular risk factors, such as diabetes mellitus, hypertension, family history of coronary artery disease, or angina, is evaluated against different thresholds.
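These four thresholds can be expressed as a simple screening check. The sketch below is illustrative only (the function name is invented), with all inputs in mmol/L as quoted above:

```python
def lipid_panel_desirable(total: float, hdl: float, ldl: float, triglycerides: float) -> bool:
    """Return True if all four values (in mmol/L) meet the 'desirable' thresholds above."""
    return (
        total <= 5.2             # total cholesterol at or below 5.2 mmol/L (200 mg/dL)
        and hdl > 1.0            # HDL above 1 mmol/L (40 mg/dL); higher is better
        and ldl < 2.6            # LDL below 2.6 mmol/L (100 mg/dL)
        and triglycerides < 1.7  # triglycerides below 1.7 mmol/L (150 mg/dL)
    )

print(lipid_panel_desirable(5.0, 1.3, 2.4, 1.2))  # True
```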
Cholesteric liquid crystals.
Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the cholesteric liquid crystalline phase. The cholesteric phase is, in fact, a chiral nematic phase, and it changes colour when its temperature changes. This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints.
Stereoisomers.
Cholesterol has 256 stereoisomers that arise from its eight stereocenters, although only two of the stereoisomers have biochemical significance ("nat"-cholesterol and "ent"-cholesterol, for "natural" and "enantiomer", respectively), and only one occurs naturally ("nat"-cholesterol).
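The figure of 256 follows from elementary combinatorics: each of the eight stereocenters can independently adopt one of two configurations, so the number of possible stereoisomers is

$$2^{8} = 256.$$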
|
6438
|
27823944
|
https://en.wikipedia.org/wiki?curid=6438
|
Chromosome
|
A chromosome is a package of DNA containing part or all of the genetic material of an organism. In most chromosomes, the very long thin DNA fibers are coated with nucleosome-forming packaging proteins; in eukaryotic cells, the most important of these proteins are the histones. Aided by chaperone proteins, the histones bind to and condense the DNA molecule to maintain its integrity. These eukaryotic chromosomes display a complex three-dimensional structure that has a significant role in transcriptional regulation.
Normally, chromosomes are visible under a light microscope only during the metaphase of cell division, where all chromosomes are aligned in the center of the cell in their condensed form. Before this stage occurs, each chromosome is duplicated (S phase), and the two copies are joined by a centromere—resulting in either an X-shaped structure if the centromere is located equatorially, or a two-armed structure if the centromere is located distally; the joined copies are called 'sister chromatids'. During metaphase, the duplicated structure (called a 'metaphase chromosome') is highly condensed and thus easiest to distinguish and study. In animal cells, chromosomes reach their highest compaction level in anaphase during chromosome segregation.
Chromosomal recombination during meiosis and subsequent sexual reproduction plays a crucial role in genetic diversity. If these structures are manipulated incorrectly, through processes known as chromosomal instability and translocation, the cell may undergo mitotic catastrophe. This will usually cause the cell to initiate apoptosis, leading to its own death, but the process is occasionally hampered by cell mutations that result in the progression of cancer.
The term 'chromosome' is sometimes used in a wider sense to refer to the individualized portions of chromatin in cells, which may or may not be visible under light microscopy. In a narrower sense, 'chromosome' can be used to refer to the individualized portions of chromatin during cell division, which are visible under light microscopy due to high condensation.
Etymology.
The word "chromosome" () comes from the Ancient Greek words (', "colour") and (', "body"), describing the strong staining produced by particular dyes. The term was coined by the German anatomist Heinrich Wilhelm Waldeyer, referring to the term 'chromatin', which was introduced by Walther Flemming.
Some of the early karyological terms have become outdated. For example, 'chromatin' (Flemming 1880) and 'chromosom' (Waldeyer 1888) both ascribe colour to a non-coloured state.
History of discovery.
Otto Bütschli was the first scientist to recognize the structures now known as chromosomes.
In a series of experiments beginning in the mid-1880s, Theodor Boveri gave definitive contributions to elucidating that chromosomes are the vectors of heredity, with two notions that became known as 'chromosome continuity' and 'chromosome individuality'.
Wilhelm Roux suggested that every chromosome carries a different genetic configuration, and Boveri was able to test and confirm this hypothesis. Aided by the rediscovery at the start of the 1900s of Gregor Mendel's earlier experimental work, Boveri identified the connection between the rules of inheritance and the behaviour of the chromosomes. Two generations of American cytologists were influenced by Boveri: Edmund Beecher Wilson, Nettie Stevens, Walter Sutton and Theophilus Painter (Wilson, Stevens, and Painter actually worked with him).
In his famous textbook, "The Cell in Development and Heredity", Wilson linked together the independent work of Boveri and Sutton (both around 1902) by naming the chromosome theory of inheritance the 'Boveri–Sutton chromosome theory' (sometimes known as the 'Sutton–Boveri chromosome theory'). Ernst Mayr remarks that the theory was hotly contested by some famous geneticists, including William Bateson, Wilhelm Johannsen, Richard Goldschmidt and T.H. Morgan, all of a rather dogmatic mindset. Eventually, absolute proof came from chromosome maps in Morgan's own laboratory.
The number of human chromosomes was published by Painter in 1923. By inspection through a microscope, he counted 24 pairs of chromosomes, giving 48 in total. His error was copied by others, and it was not until 1956 that the true number (46) was determined by Indonesian-born cytogeneticist Joe Hin Tjio.
Prokaryotes.
The prokaryotes – bacteria and archaea – typically have a single circular chromosome. The chromosomes of most bacteria (also called genophores) can range in size from only 130,000 base pairs in the endosymbiotic bacteria "Candidatus Hodgkinia cicadicola" and "Candidatus Tremblaya princeps", to more than 14,000,000 base pairs in the soil-dwelling bacterium "Sorangium cellulosum".
Some bacteria depart from this pattern. Spirochaetes such as "Borrelia burgdorferi" (causing Lyme disease) contain a single "linear" chromosome, "Vibrios" typically carry two chromosomes of very different size, and genomes of the genus "Burkholderia" carry one, two, or three chromosomes.
Structure in sequences.
Prokaryotic chromosomes have less sequence-based structure than eukaryotic ones. Bacteria typically have a single point (the origin of replication) from which replication starts, whereas some archaea contain multiple replication origins. The genes in prokaryotes are often organized in operons and do not usually contain introns, unlike eukaryotes.
DNA packaging.
Prokaryotes do not possess nuclei. Instead, their DNA is organized into a structure called the nucleoid. The nucleoid is a distinct structure and occupies a defined region of the bacterial cell. This structure is, however, dynamic and is maintained and remodeled by the actions of a range of histone-like proteins, which associate with the bacterial chromosome. In archaea, the DNA in chromosomes is even more organized, with the DNA packaged within structures similar to eukaryotic nucleosomes.
Certain bacteria also contain plasmids or other extrachromosomal DNA. These are circular structures in the cytoplasm that contain cellular DNA and play a role in horizontal gene transfer. In prokaryotes and viruses, the DNA is often densely packed and organized; in the case of archaea, by homology to eukaryotic histones, and in the case of bacteria, by histone-like proteins.
Bacterial chromosomes tend to be tethered to the plasma membrane of the bacteria. In molecular biology applications, this allows for their isolation from plasmid DNA by centrifugation of lysed bacteria and pelleting of the membranes (and the attached DNA).
Prokaryotic chromosomes and plasmids are, like eukaryotic DNA, generally supercoiled. The DNA must first be released into its relaxed state before the machinery of transcription, regulation, and replication can access it.
Eukaryotes.
Each eukaryotic chromosome consists of a long linear DNA molecule associated with proteins, forming a compact complex of proteins and DNA called "chromatin." Chromatin contains the vast majority of the DNA in an organism, but a small amount inherited maternally can be found in the mitochondria. It is present in most cells, with a few exceptions, for example, red blood cells.
Histones are responsible for the first and most basic unit of chromosome organization, the nucleosome.
Eukaryotes (cells with nuclei such as those found in plants, fungi, and animals) possess multiple large linear chromosomes contained in the cell's nucleus. Each chromosome has one centromere, with one or two arms projecting from the centromere, although, under most circumstances, these arms are not visible as such. In addition, most eukaryotes have a small circular mitochondrial genome, and some eukaryotes may have additional small circular or linear cytoplasmic chromosomes.
In the nuclear chromosomes of eukaryotes, the uncondensed DNA exists in a semi-ordered structure, where it is wrapped around histones (structural proteins), forming a composite material called chromatin.
Interphase chromatin.
The packaging of DNA into nucleosomes produces a 10 nm fibre, which may condense further into 30 nm fibres. Most of the euchromatin in interphase nuclei appears to be in the form of 30 nm fibres. Transcription requires the more decondensed 10 nm conformation of the chromatin.
During interphase (the period of the cell cycle where the cell is not dividing), two types of chromatin can be distinguished: euchromatin, which consists of DNA that is transcriptionally active, and heterochromatin, which consists of mostly inactive DNA and appears to serve structural purposes during the chromosomal stages.
Metaphase chromatin and division.
In the early stages of mitosis or meiosis (cell division), the chromatin fibres become more and more condensed. The chromosomes cease to function as accessible genetic material (transcription stops) and become a compact transportable form. The loops of thirty-nanometer chromatin fibers are thought to fold upon themselves further to form the compact metaphase chromosomes of mitotic cells. The DNA is thus condensed about ten-thousand-fold.
The chromosome scaffold, which is made of proteins such as condensin, TOP2A and KIF4, plays an important role in holding the chromatin into compact chromosomes. Loops of thirty-nanometer structure further condense with scaffold into higher order structures.
This highly compact form makes the individual chromosomes visible, and they form the classic four-arm structure, a pair of sister chromatids attached to each other at the centromere. The shorter arms are called "p arms" (from the French "petit", small) and the longer arms are called "q arms" ("q" simply follows "p" in the Latin alphabet; alternatively, it is sometimes said that "q" is short for "queue", meaning tail in French). This is the only natural context in which individual chromosomes are visible with an optical microscope.
Mitotic metaphase chromosomes are best described by a linearly organized longitudinally compressed array of consecutive chromatin loops.
During mitosis, microtubules grow from centrosomes located at opposite ends of the cell and also attach to the centromere at specialized structures called kinetochores, one of which is present on each sister chromatid. A special DNA base sequence in the region of the kinetochores provides, along with special proteins, longer-lasting attachment in this region. The microtubules then pull the chromatids apart toward the centrosomes, so that each daughter cell inherits one set of chromatids. Once the cells have divided, the chromatids are uncoiled and DNA can again be transcribed. In spite of their appearance, chromosomes are structurally highly condensed, which enables these giant DNA structures to be contained within a cell nucleus.
Human chromosomes.
Chromosomes in humans can be divided into two types: autosomes (body chromosomes) and allosomes (sex chromosomes). Certain genetic traits are linked to a person's sex and are passed on through the sex chromosomes. The autosomes contain the rest of the genetic hereditary information. All act in the same way during cell division. Human cells have 23 pairs of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), giving a total of 46 per cell. In addition to these, human cells have many hundreds of copies of the mitochondrial genome. Sequencing of the human genome has provided a great deal of information about each of the chromosomes. Below is a table compiling statistics for the chromosomes, based on the Sanger Institute's human genome information in the Vertebrate Genome Annotation (VEGA) database. The number of genes is an estimate, as it is in part based on gene predictions. Total chromosome length is an estimate as well, based on the estimated size of unsequenced heterochromatin regions.
Based on the micrographic characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite, the human chromosomes are classified into the following groups:
Karyotype.
In general, the karyotype is the characteristic chromosome complement of a eukaryote species. The preparation and study of karyotypes is part of cytogenetics.
Although the replication and transcription of DNA is highly standardized in eukaryotes, the same cannot be said for their karyotypes, which are often highly variable. There may be variation between species in chromosome number and in detailed organization.
In some cases, there is significant variation within species. Often there is:
1. variation between the two sexes
2. variation between the germline and soma (between gametes and the rest of the body)
3. variation between members of a population, due to balanced genetic polymorphism
4. geographical variation between races
5. mosaics or otherwise abnormal individuals.
Also, variation in karyotype may occur during development from the fertilized egg.
The technique of determining the karyotype is usually called "karyotyping". Cells can be locked part-way through division (in metaphase) in vitro (in a reaction vial) with colchicine. These cells are then stained, photographed, and arranged into a "karyogram", with the set of chromosomes arranged, autosomes in order of length, and sex chromosomes (here X/Y) at the end.
Like many sexually reproducing species, humans have special gonosomes (sex chromosomes, in contrast to autosomes). These are XX in females and XY in males.
History and analysis techniques.
Investigation into the human karyotype took many years to settle the most basic question: "How many chromosomes does a normal diploid human cell contain?" In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, concluding an XX/XO sex determination mechanism. In 1922, Painter was not certain whether the diploid number of man is 46 or 48, at first favouring 46. He revised his opinion later from 46 to 48, and he correctly insisted on humans having an XX/XY system.
New techniques were needed to definitively solve the problem.
It took until the mid-1950s before the human diploid number was confirmed as 46. Considering the techniques of Winiwarter and Painter, their results were quite remarkable. Chimpanzees, the closest living relatives to modern humans, have 48 chromosomes, as do the other great apes: in humans, two chromosomes fused to form chromosome 2.
Aberrations.
Chromosomal aberrations are disruptions in the normal chromosomal content of a cell. They can cause genetic conditions in humans, such as Down syndrome, although most aberrations have little to no effect. Some chromosome abnormalities, such as translocations or chromosomal inversions, do not cause disease in carriers, although they may lead to a higher chance of bearing a child with a chromosome disorder. Abnormal numbers of chromosomes or chromosome sets, called aneuploidy, may be lethal or may give rise to genetic disorders. Genetic counseling is offered for families that may carry a chromosome rearrangement.
The gain or loss of DNA from chromosomes can lead to a variety of genetic disorders. Human examples include:
Sperm aneuploidy.
Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, risk of aneuploidy is increased by tobacco smoking, and occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa.
Number in various organisms.
In eukaryotes.
The number of chromosomes in eukaryotes is highly variable. It is possible for chromosomes to fuse or break and thus evolve into novel karyotypes. Chromosomes can also be fused artificially. For example, when the 16 chromosomes of yeast were fused into one giant chromosome, it was found that the cells were still viable with only somewhat reduced growth rates.
The tables below give the total number of chromosomes (including sex chromosomes) in a cell nucleus for various eukaryotes. Most are diploid, such as humans, who have 22 different types of autosomes (each present as a homologous pair) and two sex chromosomes, giving 46 chromosomes in total. Some other organisms have more than two copies of their chromosome types, for example bread wheat, which is "hexaploid", having six copies of seven different chromosome types for a total of 42 chromosomes.
Normal members of a particular eukaryotic species all have the same number of nuclear chromosomes. Other eukaryotic chromosomes, i.e., mitochondrial and plasmid-like small chromosomes, are much more variable in number, and there may be thousands of copies per cell.
Asexually reproducing species have one set of chromosomes that are the same in all body cells. However, asexual species can be either haploid or diploid.
Sexually reproducing species have somatic cells (body cells) that are diploid [2n], having two sets of chromosomes (23 pairs in humans), one set from the mother and one from the father. Gametes (reproductive cells) are haploid [n], having one set of chromosomes. Gametes are produced by meiosis of a diploid germline cell, during which the matching chromosomes of father and mother can exchange small parts of themselves (crossover) and thus create new chromosomes that are not inherited solely from either parent. When a male and a female gamete merge during fertilization, a new diploid organism is formed.
Some animal and plant species are polyploid [Xn], having more than two sets of homologous chromosomes. Important crops such as tobacco or wheat are often polyploid, compared to their ancestral species. Wheat has a haploid number of seven chromosomes, still seen in some cultivars as well as the wild progenitors. The more common types of pasta and bread wheat are polyploid, having 28 (tetraploid) and 42 (hexaploid) chromosomes, compared to the 14 (diploid) chromosomes in wild wheat.
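The wheat chromosome counts quoted above follow from multiplying the base (haploid) number of seven by the ploidy level:

$$7 \times 2 = 14 \ \text{(diploid)}, \qquad 7 \times 4 = 28 \ \text{(tetraploid)}, \qquad 7 \times 6 = 42 \ \text{(hexaploid)}.$$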
In prokaryotes.
Prokaryote species generally have one copy of each major chromosome, but most cells can easily survive with multiple copies. For example, "Buchnera", a symbiont of aphids, has multiple copies of its chromosome, ranging from 10 to 400 copies per cell. However, in some large bacteria, such as "Epulopiscium fishelsoni" up to 100,000 copies of the chromosome can be present. Plasmids and plasmid-like small chromosomes are, as in eukaryotes, highly variable in copy number. The number of plasmids in the cell is almost entirely determined by the rate of division of the plasmid – fast division causes high copy number.
|
6440
|
1301370069
|
https://en.wikipedia.org/wiki?curid=6440
|
Colonna family
|
The House of Colonna is an Italian noble family, forming part of the papal nobility. It played a pivotal role in medieval and Renaissance Rome, supplying one pope (Martin V), 23 cardinals and many other church and political leaders. Other notable family members are Vittoria Colonna, close friend of Michelangelo, Marcantonio II Colonna, leader of the papal fleet in the Battle of Lepanto (1571), and Costanza Colonna, patron and protector of Caravaggio. The family was notable for its bitter feud with the Orsini family over their influence in Rome, which was eventually settled by the issuing of the papal bull "Pax Romana" by Pope Julius II in 1511. In 1571, the heads of both families married nieces of Pope Sixtus V. Thereafter, historians recorded that "no peace had been concluded between the princes of Christendom, in which they had not been included by name". Today, the family is led by Don Prospero Colonna (b. 1956).
History.
Origins.
According to tradition, the Colonna family is a branch of the Counts of Tusculum, through Peter (Pietro Colonna, 1078–1108 or 1099–1151), son of Gregory III, who was called Peter "de Columna" (Petrus de Columna) after his property, the Columna Castle in Colonna in the Alban Hills, and was Lord of Colonna, Monteporzio, Zagarolo and Gallicano. Further back, the family traces its lineage past the Counts of Tusculum via Lombard and Italo-Roman nobles, merchants, and clergy through the Early Middle Ages, ultimately claiming origins from the Julio-Claudian dynasty and the gens Julia, whose origin is lost in the mists of time but which entered the annals for the first time in 489 BC with the consulship of Gaius Julius Iullus. Peter married Elena, Lady of Palestrina, widow of a Donodeo and relative of Pope Paschal II.
The first cardinal from the family was appointed in 1206, when Giovanni Colonna di Carbognano was made Cardinal Deacon of SS. Cosma e Damiano. For many years, Cardinal Giovanni di San Paolo (elevated in 1193) was identified as a member of the Colonna family and therefore its first representative in the College of Cardinals, but modern scholars have established that this was based on false information from the beginning of the 16th century.
Giovanni Colonna, a nephew of Cardinal Giovanni Colonna di Carbognano, made his solemn vows as a Dominican around 1228 and received his theological and philosophical training at the Roman "studium" of Santa Sabina, the forerunner of the Pontifical University of Saint Thomas Aquinas, "Angelicum". He served as the Provincial of the Roman province of the Dominican Order and led the provincial chapter of 1248 at Anagni. Colonna was appointed as Archbishop of Messina in 1255.
Margherita Colonna (died 1248) was a member of the Franciscan Order. She was beatified by Pope Pius IX in 1848.
At this time, a rivalry began with the pro-papal Orsini family, leaders of the Guelph faction. This reinforced the pro-Emperor Ghibelline course that the Colonna family followed throughout the period of conflict between the Papacy and the Holy Roman Empire. Ironically, according to their own family legend, the Orsini are also descended from the Julio-Claudian dynasty of ancient Rome.
Colonna versus the Papacy.
In 1297, Cardinal Jacopo disinherited his brothers Ottone, Matteo, and Landolfo of their lands. The latter three appealed to Pope Boniface VIII, who ordered Jacopo to return the land, and furthermore hand over the family's strongholds of Colonna, Palestrina, and other towns to the Papacy. Jacopo refused; in May, Boniface removed him from the College of Cardinals and excommunicated him and his followers.
The Colonna family (aside from the three brothers allied with the Pope) declared that Boniface had been elected illegally following the unprecedented abdication of Pope Celestine V. The dispute led to open warfare, and in September, Boniface appointed Landolfo to the command of his army, to put down the revolt of Landolfo's own Colonna relatives. By the end of 1298, Landolfo had captured Colonna, Palestrina and other towns, and razed them to the ground. The family's lands were distributed among Landolfo and his loyal brothers; the rest of the family fled Italy.
The exiled Colonnas allied with the Pope's other great enemy, Philip IV of France, who in his youth had been tutored by Cardinal Egidio Colonna. In September 1303, Sciarra Colonna and Philip's advisor, Guillaume de Nogaret, led a small force into Anagni to arrest Boniface VIII and bring him to France, where he was to stand trial. The two managed to apprehend the pope, and Sciarra reportedly slapped the pope in the face in the process, which was accordingly dubbed the "Outrage of Anagni". The attempt eventually failed after a few days, when locals freed the pope. However, Boniface VIII died on 11 October, allowing France to dominate his weaker successors during the Avignon papacy.
Late Middle Ages.
The family remained at the centre of civic and religious life throughout the late Middle Ages. Cardinal Egidio Colonna died at the papal court in Avignon in 1314. An Augustinian, he had studied theology in Paris under St. Thomas Aquinas and become one of the most authoritative thinkers of his time.
In the 14th century, the family sponsored the decoration of the Church of San Giovanni, most notably the floor mosaics.
In 1328, Louis IV of Germany marched into Italy for his coronation as Holy Roman Emperor. As Pope John XXII was residing in Avignon and had publicly declared that he would not crown Louis, the King decided to be crowned by a member of the Roman aristocracy, who proposed Sciarra Colonna. In honor of this event, the Colonna family was granted the privilege of using the imperial pointed crown on top of their coat of arms.
The poet Petrarch was a great friend of the family, in particular of Giovanni Colonna, and often lived in Rome as a guest of the family. He composed a number of sonnets for special occasions within the Colonna family, including "Colonna the Glorious, the great Latin name upon which all our hopes rest". In this period, the Colonna started claiming they were descendants of the Julio-Claudian dynasty.
At the Council of Constance, the Colonna finally succeeded in their papal ambitions when Oddone Colonna was elected on 14 November 1417. As Martin V, he reigned until his death on 20 February 1431.
Early modern period.
Vittoria Colonna became famous in the sixteenth century as a poet and a figure in literate circles.
In 1627, Anna Colonna, daughter of Filippo I Colonna, married Taddeo Barberini of the Barberini family, a nephew of Pope Urban VIII.
In 1728, the Carbognano branch (Colonna di Sciarra) of the Colonna family added the name Barberini to its family name when Giulio Cesare Colonna di Sciarra married Cornelia Barberini, daughter of the last male Barberini to hold the name and granddaughter of Maffeo Barberini (son of Taddeo Barberini).
Current status.
The Colonna family were Prince Assistants to the Papal Throne.
The family residence in Rome, the Palazzo Colonna, is open to the public every Friday and Saturday morning.
The main 'Colonna di Paliano' line is represented today by Prince Marcantonio Colonna di Paliano, Prince and Duke of Paliano (b. 1948), whose heir is Don Giovanni Andrea Colonna di Paliano (b. 1975), and by Don Prospero Colonna di Paliano, Prince of Avella (b. 1956), whose heir is Don Filippo Colonna di Paliano (b. 1995).
The 'Colonna di Stigliano' line is represented by Don Prospero Colonna di Stigliano, Prince of Stigliano (b. 1938), whose heir is his nephew Don Stefano Colonna di Stigliano (b. 1975).
|
6443
|
1300900768
|
https://en.wikipedia.org/wiki?curid=6443
|
Ceuta
|
Ceuta is an autonomous city of Spain on the North African coast. Bordered by Morocco, it lies along the boundary between the Mediterranean Sea and the Atlantic Ocean. Ceuta is one of the special member state territories of the European Union. It was a regular municipality belonging to the province of Cádiz prior to the passing of its Statute of Autonomy in March 1995, as provided by the Spanish Constitution, henceforth becoming an autonomous city.
Ceuta, like Melilla and the Canary Islands, was classified as a free port before Spain joined the European Union. Its population is predominantly Christian and Muslim, with a small minority of Sephardic Jews and Sindhi Hindus from Pakistan.
Spanish is the official language, while Darija Arabic is also widely spoken.
Names.
The name Abyla has been said to have been a Punic name ("Lofty Mountain" or "Mountain of God") for Jebel Musa, the southern Pillar of Hercules. The name of the mountain was in fact "Habenna" ("Stone" or "Stele") or "ʾAbin-ḥīq" ("Rock of the Bay"), referring to the nearby Bay of Benzú. The name was hellenized variously as "Ápini", "Abýla", "Abýlē", "Ablýx", and "Abilē Stḗlē" ("Pillar of Abyla") and rendered in Latin as "Mount Abyla" or "the Pillar of Abyla".
The settlement below Jebel Musa was later renamed for the seven hills around the site, collectively referred to as the "Seven Brothers". In particular, the Roman stronghold at the site took the name "Fort at the Seven Brothers". This was gradually shortened to Septem or, occasionally, Septum or Septa. These clipped forms continued as Berber "Sebta" and Arabic "Sabtan" or "Sabtah", which themselves became "Ceuta" in Portuguese and Spanish.
History.
Ancient.
Controlling access between the Atlantic Ocean and the Mediterranean Sea, the Strait of Gibraltar is an important military and commercial chokepoint. The Phoenicians realized the extremely narrow isthmus joining the Peninsula of Almina to the African mainland made Ceuta eminently defensible and established an outpost there early in the 1st millennium BC. The Greek geographers record it by variations of "Abyla", the ancient name of nearby Jebel Musa. Beside Calpe, the other Pillar of Hercules now known as the Rock of Gibraltar, the Phoenicians established Kart at what is now San Roque, Spain. Other good anchorages nearby became Phoenician and then Carthaginian ports at what are now Tangiers and Cádiz.
After Carthage's destruction in the Punic Wars, most of northwest Africa was left to the Roman client states of Numidia and, around Abyla, Mauretania. Punic culture continued to thrive in what the Romans knew as "Septem". After the Battle of Thapsus in 46 BC, Caesar and his heirs began annexing North Africa directly as Roman provinces but, as late as Augustus, most of Septem's Berber residents continued to speak and write in Punic.
Caligula assassinated the Mauretanian king Ptolemy in AD 40 and seized his kingdom, which Claudius organized in AD 42, placing Septem in the province of Tingitana and raising it to the level of a colony. It was subsequently Romanized and thrived into the late 3rd century, trading heavily with Roman Spain and becoming well known for its salted fish. Roads connected it overland with Tingis (Tangiers) and Volubilis. In the late 4th century, Septem still had 10,000 inhabitants, nearly all Christian citizens speaking African Romance, a local dialect of Latin.
Medieval.
Vandals, probably invited by Count Boniface as protection against the empress dowager, crossed the strait near Tingis around 425 and swiftly overran Roman North Africa. Their king, Gaiseric, focused his attention on the rich lands around Carthage; although the Romans eventually accepted his conquests and he continued to raid them anyway, he soon lost control of Tingis and Septem in a series of Berber revolts. When Justinian decided to reconquer the Vandal lands, his victorious general Belisarius continued along the coast, making Septem a westernmost outpost of the Byzantine Empire around 533. Unlike the former ancient Roman administration, however, Eastern Rome did not push far into the hinterland and made the more defensible Septem their regional capital in place of Tingis.
Epidemics, less capable successors, and overstretched supply lines forced a retrenchment and left Septem isolated. It is likely that its count ("") was obliged to pay homage to the Visigoth Kingdom in Spain in the early 7th century. There are no reliable contemporary accounts of the end of the Islamic conquest of the Maghreb around 710. Instead, the rapid Muslim conquest of Spain produced romances concerning Count Julian of Septem and his betrayal of Christendom in revenge for the dishonor that befell his daughter at King Roderick's court. Allegedly with Julian's encouragement and instructions, the Berber convert and freedman Tariq ibn Ziyad took his garrison from Tangiers across the strait and overran the Spanish so swiftly that both he and his master Musa bin Nusayr fell afoul of a jealous caliph, who stripped them of their wealth and titles.
After the death of Julian, sometimes also described as a king of the Ghomara Berbers, Berber converts to Islam took direct control of what they called Sebta. It was then destroyed during their great revolt against the Umayyad Caliphate around 740. Sebta subsequently remained a small village of Muslims and Christians surrounded by ruins until its resettlement in the 9th century by Mâjakas, chief of the Majkasa Berber tribe, who started the short-lived Banu Isam dynasty. His great-grandson briefly allied his tribe with the Idrisids, but Banu Isam rule ended in 931 when he abdicated in favor of Abd ar-Rahman III, the Umayyad ruler of Córdoba, Spain.
Chaos ensued with the fall of the Caliphate of Córdoba in 1031. Following this, Ceuta and Muslim Iberia were controlled by successive North African dynasties. Starting in 1084, the Almoravid Berbers ruled the region until 1147, when the Almohads conquered the land. Apart from Ibn Hud's rebellion in 1232, they ruled until the Tunisian Hafsids established control. The Hafsids' influence in the west rapidly waned, and Ceuta's inhabitants eventually expelled them in 1249. After this, a period of political instability persisted, under competing interests from the Marinids and Granada as well as autonomous rule under the native Banu al-Azafi. The Marinid rulers of Fez finally conquered the region in 1387, with assistance from Aragon.
Portuguese.
On the morning of 21 August 1415, King John I of Portugal led his sons and their assembled forces in a surprise assault that would come to be known as the Conquest of Ceuta. The battle was almost anticlimactic, because the 45,000 men who traveled on 200 Portuguese ships caught the defenders of Ceuta off guard and suffered only eight casualties. By nightfall the town was captured. On the morning of 22 August, Ceuta was in Portuguese hands. Álvaro Vaz de Almada, 1st Count of Avranches, was asked to hoist what was to become the flag of Ceuta, which is identical to the flag of Lisbon, but in which the coat of arms derived from that of the Kingdom of Portugal was added to the center; the original Portuguese flag and coat of arms of Ceuta remained unchanged, and the modern-day Ceuta flag features the configuration of the Portuguese shield.
John's son Henry the Navigator distinguished himself in the battle, being wounded during the conquest. The looting of the city proved to be less profitable than expected for John I, so he decided to keep the city to pursue further enterprises in the area.
Pedro de Meneses served as the first governor of Ceuta, from 1415 to 1437.
The Marinid Sultanate began a siege in 1419 but was defeated by the first governor of Ceuta before reinforcements arrived in the form of John, Constable of Portugal, and his brother Henry the Navigator, who were sent with troops to defend Ceuta.
Under King John I's son, Duarte, the city of Ceuta rapidly became a drain on the Portuguese treasury. Trans-Saharan trade journeyed instead to Tangier. It was soon realized that without the city of Tangier, possession of Ceuta was worthless. In 1437, Duarte's brothers Henry the Navigator and Fernando, the Saint Prince, persuaded him to launch an attack on the Marinid sultanate. The resulting Battle of Tangier (1437), led by Henry, was a debacle. In the resulting treaty, Henry promised to deliver Ceuta back to the Marinids in return for allowing the Portuguese army to depart unmolested, a promise on which he reneged.
Possession of Ceuta indirectly led to further Portuguese expansion. The main area of Portuguese expansion, at this time, was the coast of the Maghreb, where there was grain, cattle, sugar, and textiles, as well as fish, hides, wax, and honey.
Ceuta had to endure alone for 43 years, until the position of the city was consolidated with the taking of Ksar es-Seghir (1458), Arzila and Tangier (1471) by the Portuguese.
The city was recognized as a Portuguese possession by the Treaty of Alcáçovas (1479) and by the Treaty of Tordesillas (1494).
In the 1540s the Portuguese began building the Royal Walls of Ceuta as they are today including bastions, a navigable moat and a drawbridge. Some of these bastions are still standing, like the bastions of Coraza Alta, Bandera and Mallorquines.
Luís de Camões lived in Ceuta between 1549 and 1551, losing his right eye in battle, which influenced his work of poetry "Os Lusíadas".
Union between Portugal and Spain.
In 1578, King Sebastian of Portugal died at the Battle of Alcácer Quibir (known as the Battle of the Three Kings) in what is today northern Morocco, without descendants, triggering the 1580 Portuguese succession crisis. His grand-uncle, the elderly Cardinal Henry, succeeded him as King, but also had no descendants, having taken holy orders. When the cardinal-king died two years later, three grandchildren of King Manuel I of Portugal claimed the throne: Catherine, Duchess of Braganza; António, Prior of Crato; and Philip II of Spain.
Philip prevailed and was crowned King Philip I of Portugal in 1581, uniting the two crowns and overseas empires.
During the Union with Spain, 1580 to 1640, Ceuta attracted many residents of Spanish origin and became the only city of the Portuguese Empire that sided with Spain when Portugal regained its independence in the Portuguese Restoration War of 1640.
Spanish.
On 1 January 1668, King Afonso VI of Portugal recognised the formal allegiance of Ceuta to Spain and ceded Ceuta to King Carlos II of Spain by the Treaty of Lisbon.
The city was attacked by Moroccan forces under Moulay Ismail during the Siege of Ceuta (1694–1727), one of the longest sieges in history. Over its course, the city underwent changes leading to the loss of its Portuguese character. While most of the military operations took place around the Royal Walls of Ceuta, there were also small-scale penetrations by Spanish forces at various points on the Moroccan coast, and seizure of shipping in the Strait of Gibraltar.
During the Napoleonic Wars (1803–1815), Spain allowed Britain to occupy Ceuta. Occupation began in 1810, with Ceuta being returned at the conclusion of the wars. Disagreements regarding the border of Ceuta resulted in the Hispano-Moroccan War (1859–60), which ended at the Battle of Tetuán.
In July 1936, General Francisco Franco took command of the Spanish Army of Africa and rebelled against the Spanish republican government; his military uprising led to the Spanish Civil War of 1936–1939. Franco transported troops to mainland Spain in an airlift using transport aircraft supplied by Germany and Italy. Ceuta became one of the first battlegrounds of the uprising: General Franco's rebel nationalist forces seized Ceuta, while at the same time the city came under fire from the air and sea forces of the official republican government.
The Llano Amarillo monument was erected to honor Francisco Franco; it was inaugurated on 13 July 1940. The tall obelisk has since been abandoned, but the shield symbols of the Falange and Imperial Eagle remain visible.
Following the 1947 Partition of India, a substantial number of Sindhi Hindus from current-day Pakistan settled in Ceuta, adding to a small Hindu community that had existed in Ceuta since 1893, connected to Gibraltar's.
When Spain recognized the independence of Spanish Morocco in 1956, Ceuta and the other "plazas de soberanía" remained under Spanish rule. Spain considered them integral parts of the Spanish state, but Morocco has disputed this point.
Culturally, modern Ceuta is part of the Spanish region of Andalusia. It was attached to the province of Cádiz until 1995, the Spanish coast being only 20 km (12.5 miles) away. It is a cosmopolitan city, with a large ethnic Arab-Berber Muslim minority as well as Sephardic Jewish and Hindu minorities.
On 5 November 2007, King Juan Carlos I visited the city, sparking great enthusiasm from the local population and protests from the Moroccan government. It was the first time a Spanish head of state had visited Ceuta in 80 years.
Since 2010, Ceuta (and Melilla) have declared the Muslim holiday of Eid al-Adha, or Feast of the Sacrifice, an official public holiday. It is the first time a non-Christian religious festival has been officially celebrated in Spanish-ruled territory since the Reconquista.
Geography.
Ceuta is separated from the province of Cádiz on the Spanish mainland by the Strait of Gibraltar, and it shares a land border with M'diq-Fnideq Prefecture in the Kingdom of Morocco. It has an area of about 18.5 km². It is dominated by Monte Anyera, a hill along its western frontier with Morocco, which is guarded by a Spanish military fort. Monte Hacho on the Peninsula of Almina overlooking the port is one of the possible locations of the southern pillar of the Pillars of Hercules of Greek legend (the other possibility being Jebel Musa).
Important Bird Area.
The Ceuta Peninsula has been recognised as an Important Bird Area (IBA) by BirdLife International because the site is part of a migratory bottleneck, or choke point, at the western end of the Mediterranean for large numbers of raptors, storks and other birds flying between Europe and Africa. These include European honey buzzards, black kites, short-toed snake eagles, Egyptian vultures, griffon vultures, black storks, white storks and Audouin's gulls.
Climate.
Ceuta has a maritime-influenced Mediterranean climate, similar to nearby Spanish and Moroccan cities such as Tarifa, Algeciras or Tangiers. The average diurnal temperature variation is relatively low, though records are limited because the Ceuta weather station has only been in operation since 2003. Ceuta has relatively mild winters for the latitude, while summers are warm yet milder than in the interior of Southern Spain, due to the moderating effect of the Strait of Gibraltar. Summers are very dry, but yearly precipitation is still high enough that the climate could be considered humid if the summers were not so arid.
Government and administration.
Since 1995, Ceuta is, along with Melilla, one of the two autonomous cities of Spain.
Ceuta is known officially in Spanish as "Ciudad Autónoma de Ceuta" (English: "Autonomous City of Ceuta"), with a rank between a standard municipality and an autonomous community. Ceuta is part of the territory of the European Union. The city was a free port before Spain joined the European Union in 1986. Now it has a low-tax system within the Economic and Monetary Union of the European Union.
Since 1979, Ceuta has held elections to its 25-seat assembly every four years. The leader of its government was the Mayor until the Autonomy Statute provided for the new title of Mayor-President. In the most recent election, the People's Party (PP) won 18 seats, keeping Juan Jesús Vivas as Mayor-President, a post he has held since 2001. The remaining seats are held by the regionalist Caballas Coalition (4) and the Socialist Workers' Party (PSOE, 3).
Owing to its small population, Ceuta elects only one member of the Congress of Deputies, the lower house of the "Cortes Generales" (the Spanish Parliament). Following the most recent general election, this post is held by María Teresa López of Vox.
Ceuta is subdivided into 63 "barriadas" ("neighborhoods"), such as Barriada de Berizu, Barriada de P. Alfonso, Barriada del Sarchal, and El Hacho.
Ceuta maintains its own police force.
Defence and Civil Guard.
The defence of the enclave is the responsibility of the Spanish Armed Forces' General Command of Ceuta (COMGECEU), whose combat components are provided by the Spanish Army.
The command also includes its headquarters battalion as well as logistics elements.
In 2023, the Spanish Navy replaced the "Aresa"-class patrol boat "P-114" in the territory with the "Rodman"-class patrol boat "Isla de León".
Ceuta itself is only distant from the main Spanish naval base at Rota on the Spanish mainland. The Spanish Air Force's Morón Air Base is also within proximity.
The Civil Guard is responsible for border security and protects both the territory's fortified land border as well as its maritime approaches against frequent, and sometimes significant, migrant incursions.
Economy.
The official currency of Ceuta is the euro. It is part of a special low tax zone in Spain. Ceuta is one of two Spanish port cities on the northern shore of Africa, along with Melilla. They are historically military strongholds, free ports, oil ports, and also fishing ports. Today the economy of the city depends heavily on its port (now in expansion) and its industrial and retail centres. Ceuta Heliport is now used to connect the city to mainland Spain by air. Lidl, Decathlon and El Corte Inglés have branches in Ceuta. There is also a casino.
Border trade between Ceuta and Morocco is active because of the advantage of the city's tax-free status. Thousands of Moroccan women are involved daily in the cross-border porter trade, as "porteadoras". The Moroccan dirham is used in such trade, even though prices are marked in euros.
Transport.
The city's Port of Ceuta is connected to the Port of Algeciras across the Strait of Gibraltar by multiple daily sailings of ferries.
A single road border checkpoint to the south of Ceuta near Fnideq allows for cars and pedestrians to travel between Morocco and Spain. An additional border crossing for pedestrians exists between Benzú and Belyounech on the northern coast. The rest of the border is closed and inaccessible.
There is a bus service throughout the city, and while it does not pass into neighbouring Morocco, it services both frontier crossings.
Hospitals.
Several hospitals are located within Ceuta.
Demographics.
As of 2024, its population was 83,299.
Due to its location, Ceuta is home to a mixed ethnic and religious population. The two main religious groups are Christians and Muslims. As of 2006, approximately 50% of the population was Christian and approximately 48% Muslim. As of a 2018 estimate, around 67.8% of the city's population were born in Ceuta.
Spanish is the primary and official language of the enclave. Moroccan Arabic (Darija) is widely spoken. In 2021, the Council of Europe demanded that Spain formally recognize the language by 2023.
Religion.
Christianity has been present in Ceuta continuously from late antiquity, as evidenced by the ruins of a basilica in downtown Ceuta and accounts of the martyrdom of St. Daniel Fasanella and his Franciscans in 1227 during the Almohad Caliphate.
The town's Grand Mosque had been built over a Byzantine-era church. In 1415, the year of the city's conquest, the Portuguese converted the Grand Mosque into Ceuta Cathedral. The present form of the cathedral dates to refurbishments undertaken in the late 17th century, combining baroque and neoclassical elements. It was dedicated to St Mary of the Assumption in 1726.
The Roman Catholic Diocese of Ceuta was established in 1417. It incorporated the suppressed Diocese of Tanger in 1570. The Diocese of Ceuta was a suffragan of Lisbon until 1675, when it became a suffragan of Seville. In 1851, Ceuta's administration was notionally merged into the Diocese of Cádiz and Ceuta as part of a concordat between Spain and the Holy See; the union was not actually accomplished, however, until 1879.
Small Jewish and Hindu minorities are also present in the city.
Migration.
Like Melilla, Ceuta attracts African migrants who try to use it as an entry to Europe. As a result, the enclave is surrounded by double fences that are high, and hundreds of migrants congregate near the fences waiting for a chance to cross them. The fences are regularly stormed by migrants trying to claim asylum once they enter Ceuta.
Education.
The University of Granada offers undergraduate programmes at their campus in Ceuta. Like all areas of Spain, Ceuta is also served by the National University of Distance Education (UNED).
While primary and secondary education are generally offered in Spanish only, a growing number of schools are entering the Bilingual Education Programme.
Twin towns and sister cities.
Ceuta maintains twin-town and sister-city relationships with a number of other municipalities.
Dispute with Morocco.
The Moroccan government has repeatedly called for Spain to transfer the sovereignty of Ceuta, Melilla and the "plazas de soberanía" to Morocco, with Spain's refusal to do so serving as a major source of tension in Morocco–Spain relations. In Morocco, Ceuta is frequently referred to as the "occupied Sebtah", and the Moroccan government has argued that the city, along with other Spanish territories in the region, are colonies. One of the major arguments used by Morocco in their attempts to acquire sovereignty over Ceuta refers to the geographical position of the city, as Ceuta is an exclave surrounded by Moroccan territory and the Mediterranean Sea and has no territorial continuity with the rest of Spain. This argument was originally developed by one of the founders of the Moroccan Istiqlal Party, Allal El Fassi, who openly advocated for Morocco to invade and occupy Ceuta and other North African territories under Spanish rule. Spain, in line with the majority of nations in the rest of the world, has never recognized Morocco's claim over Ceuta. The official position of the Spanish government is that Ceuta is an integral part of Spain, and has been since the 16th century, centuries prior to Morocco's independence from Spain and France in 1956. The majority of Ceuta's population support continued Spanish sovereignty and are opposed to Moroccan control over the territory.
In 1986, Spain joined NATO. However, Ceuta is not under NATO protection since Article 6 of the North Atlantic Treaty limits such coverage to Europe and North America and islands north of the Tropic of Cancer. By contrast, French Algeria was explicitly included in the treaty upon France's entry. Legal experts have claimed that other articles of the treaty could cover Spanish territories in North Africa, but this interpretation has not been tested in practice. During the 2022 Madrid summit, the issue of the protection of Ceuta was raised by Spain, with NATO Secretary General Jens Stoltenberg stating: "On which territories NATO protects and Ceuta and Melilla, NATO is there to protect all Allies against any threats. At the end of the day, it will always be a political decision to invoke Article 5, but rest assured NATO is there to protect and defend all Allies". On 21 December 2020, following statements made by Moroccan Prime Minister Saadeddine Othmani that Ceuta is "Moroccan as the Sahara", the Spanish government summoned the Moroccan ambassador, Karima Benyaich, to convey that Spain expects all its partners to respect the sovereignty and territorial integrity of its territory in Africa and asked for an explanation for Othmani's words.
|
6444
|
5229428
|
https://en.wikipedia.org/wiki?curid=6444
|
Cleopatra (disambiguation)
|
Cleopatra (69–30 BC) was the last active Ptolemaic ruler of Egypt before it became a Roman province.
Cleopatra may also refer to:
|
6445
|
27823944
|
https://en.wikipedia.org/wiki?curid=6445
|
Carcinogen
|
A carcinogen is any agent that promotes the development of cancer. Carcinogens can include synthetic chemicals, naturally occurring substances, physical agents such as ionizing and non-ionizing radiation, and biologic agents such as viruses and bacteria. Most carcinogens act by creating mutations in DNA that disrupt a cell's normal processes for regulating growth, leading to uncontrolled cellular proliferation. This occurs when the cell's DNA repair processes fail to identify DNA damage, allowing the defect to be passed down to daughter cells. The damage accumulates over time. This is typically a multi-step process during which the regulatory mechanisms within the cell are gradually dismantled, allowing for unchecked cellular division.
The specific mechanism of carcinogenic activity is unique to each agent and cell type. Carcinogens can be broadly categorized, however, as activation-dependent or activation-independent, categories which relate to the agent's ability to engage directly with DNA. Activation-dependent agents are relatively inert in their original form, but are bioactivated in the body into metabolites or intermediaries capable of damaging human DNA. These are also known as "indirect-acting" carcinogens. Examples of activation-dependent carcinogens include polycyclic aromatic hydrocarbons (PAHs), heterocyclic aromatic amines, and mycotoxins. Activation-independent carcinogens, or "direct-acting" carcinogens, are those that are capable of directly damaging DNA without any modification to their molecular structure. These agents typically include electrophilic groups that react readily with the net negative charge of DNA molecules. Examples of activation-independent carcinogens include ultraviolet light, ionizing radiation and alkylating agents.
The time from exposure to a carcinogen to the development of cancer is known as the latency period. For most solid tumors in humans, the latency period is between 10 and 40 years, depending on cancer type. For blood cancers, the latency period may be as short as two years. Because of these prolonged latency periods, identification of carcinogens can be challenging.
A number of organizations review and evaluate the cumulative scientific evidence regarding the potential carcinogenicity of specific substances. Foremost among these is the International Agency for Research on Cancer (IARC). IARC routinely publishes monographs in which specific substances are evaluated for their potential carcinogenicity to humans and subsequently categorized into one of four groupings: Group 1: Carcinogenic to humans, Group 2A: Probably carcinogenic to humans, Group 2B: Possibly carcinogenic to humans and Group 3: Not classifiable as to its carcinogenicity to humans. Other organizations that evaluate the carcinogenicity of substances include the National Toxicology Program of the US Public Health Service, NIOSH, the American Conference of Governmental Industrial Hygienists and others.
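For illustration, the IARC scheme described above amounts to a small lookup table. The sketch below is purely illustrative (the variable name is invented) and encodes only the four groups named in the text:

```python
# IARC carcinogenicity groups, as described above (illustrative only).
IARC_GROUPS = {
    "1":  "Carcinogenic to humans",
    "2A": "Probably carcinogenic to humans",
    "2B": "Possibly carcinogenic to humans",
    "3":  "Not classifiable as to its carcinogenicity to humans",
}

print(IARC_GROUPS["2A"])  # Probably carcinogenic to humans
```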
There are numerous sources of exposure to carcinogens, including ultraviolet radiation from the sun, radon gas emitted in residential basements, environmental contaminants such as chlordecone, cigarette smoke, and ingestion of some types of foods such as alcohol and processed meats. Occupational exposures represent a major source of carcinogens, with an estimated 666,000 annual fatalities worldwide attributable to work-related cancers. According to NIOSH, 3–6% of cancers worldwide are due to occupational exposures. Well-established links between occupational carcinogens and cancers include vinyl chloride and hemangiosarcoma of the liver, benzene and leukemia, aniline dyes and bladder cancer, asbestos and mesothelioma, and polycyclic aromatic hydrocarbons and scrotal cancer among chimney sweeps, to name a few.
Radiation.
Ionizing Radiation.
CERCLA identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, gamma, or neutron), its radioactive strength, its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure determine the potential hazard. Carcinogenicity of radiation depends on the type of radiation, type of exposure, and penetration. For example, alpha radiation has low penetration and is not a hazard outside the body, but alpha emitters are carcinogenic when inhaled or ingested. Thorotrast, for instance, a thorium dioxide (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is a potent human carcinogen because of its retention within various organs and persistent emission of alpha particles. Low-level ionizing radiation may induce irreparable DNA damage, leading to replicational and transcriptional errors needed for neoplasia, or may trigger viral interactions, leading to premature aging and cancer.
Non-ionizing radiation.
Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum including radio waves, microwaves, infrared radiation and visible light are thought not to be, because they have insufficient energy to break chemical bonds. Evidence for carcinogenic effects of non-ionizing radiation is generally inconclusive, though there are some documented cases of radar technicians with prolonged high exposure experiencing significantly higher cancer incidence.
Higher-energy radiation, including ultraviolet radiation (present in sunlight), generally "is" carcinogenic, if received in sufficient doses. For most people, ultraviolet radiation from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years.
Substances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) are not carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation.
Common carcinogens associated with food.
Alcohol.
Alcohol is a carcinogen of the head and neck, esophagus, liver, colon and rectum, and breast. It has a synergistic effect with tobacco smoke in the development of head and neck cancers. In the United States approximately 6% of cancers and 4% of cancer deaths are attributable to alcohol use.
Processed meats.
Chemicals used in processed and cured meat such as some brands of bacon, sausages and ham may produce carcinogens. For example, nitrites used as food preservatives in cured meat such as bacon have been noted as carcinogenic, with demographic links, though not established causation, to colon cancer.
Meats cooked at high temperatures.
Cooking food at high temperatures, for example grilling or barbecuing meats, may also lead to the formation of minute quantities of many potent carcinogens comparable to those found in cigarette smoke (e.g., benzo["a"]pyrene). Charring of food resembles coking and tobacco pyrolysis, and produces similar carcinogens. Several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2–3 minutes before grilling shortens the time on the hot pan and removes heterocyclic amine (HCA) precursors, which can help minimize the formation of these carcinogens.
Acrylamide in foods.
Frying, grilling or broiling food at high temperatures, especially starchy foods, until a toasted crust is formed generates acrylamides. This discovery in 2002 led to international health concerns. Subsequent research, however, has found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
Biologic agents.
Several biologic agents are known carcinogens.
Aflatoxin B1, a toxin produced by the fungus "Aspergillus flavus" that is a common contaminant of stored grains and nuts, is a known cause of hepatocellular cancer. The bacterium "Helicobacter pylori" is known to cause stomach cancer and MALT lymphoma. The hepatitis B and C viruses are associated with the development of hepatocellular cancer, and HPV is the primary cause of cervical cancer.
Cigarette smoke.
Tobacco smoke contains at least 70 known carcinogens and is implicated in the development of numerous types of cancers including cancers of the lung, larynx, esophagus, stomach, kidney, pancreas, liver, bladder, cervix, colon, rectum and blood. Potent carcinogens found in cigarette smoke include polycyclic aromatic hydrocarbons (PAH, such as benzo(a)pyrene), benzene, and nitrosamine.
Occupational carcinogens.
Given that populations of workers are more likely to have consistent, often high-level exposures to chemicals rarely encountered in normal life, much of the evidence for the carcinogenicity of specific agents is derived from studies of workers.
Selected carcinogens.
Mechanisms of carcinogenicity.
Carcinogens can be classified as genotoxic or nongenotoxic. Genotoxins cause irreversible genetic damage or mutations by binding to DNA. Genotoxins include chemical agents like N-nitroso-N-methylurea (NMU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA.
Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds.
Classification.
International Agency for Research on Cancer.
The International Agency for Research on Cancer (IARC) is an intergovernmental agency established in 1965, which forms part of the World Health Organization of the United Nations. It is based in Lyon, France. Since 1971 it has published a series of "Monographs on the Evaluation of Carcinogenic Risks to Humans" that have been highly influential in the classification of possible carcinogens.
Globally Harmonized System.
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is a United Nations initiative to attempt to harmonize the different systems of assessing chemical risk which currently exist (as of March 2009) around the world. It classifies carcinogens into two categories, of which the first may be divided again into subcategories if so desired by the competent regulatory authority:
U.S. National Toxicology Program.
The National Toxicology Program of the U.S. Department of Health and Human Services is mandated to produce a biennial "Report on Carcinogens". As of August 2024, the latest edition was the 15th report (2021). It classifies carcinogens into two groups:
American Conference of Governmental Industrial Hygienists.
The American Conference of Governmental Industrial Hygienists (ACGIH) is a private organization best known for its publication of threshold limit values (TLVs) for occupational exposure and monographs on workplace chemical hazards. It assesses carcinogenicity as part of a wider assessment of the occupational hazards of chemicals.
European Union.
The European Union classification of carcinogens is contained in the Regulation (EC) No 1272/2008. It consists of three categories:
The former European Union classification of carcinogens was contained in the Dangerous Substances Directive and the Dangerous Preparations Directive. It also consisted of three categories:
This assessment scheme is being phased out in favor of the GHS scheme (see above), to which it is very close in category definitions.
Safe Work Australia.
Under a previous name, the NOHSC, in 1999 Safe Work Australia published the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(1999)].
Section 4.76 of this document outlines the criteria for classifying carcinogens as approved by the Australian government. This classification consists of three categories:
Major carcinogens implicated in the four most common cancers worldwide.
In this section, the carcinogens implicated as the main causative agents of the four most common cancers worldwide are briefly described. These four cancers are lung, breast, colon, and stomach cancers. Together they account for about 41% of worldwide cancer incidence and 42% of cancer deaths (for more detailed information on the carcinogens implicated in these and other cancers, see references).
Lung cancer.
Lung cancer (pulmonary carcinoma) is the most common cancer in the world, both in terms of cases (1.6 million cases; 12.7% of total cancer cases) and deaths (1.4 million deaths; 18.2% of total cancer deaths). Lung cancer is largely caused by tobacco smoke. Risk estimates for lung cancer in the United States indicate that tobacco smoke is responsible for 90% of lung cancers. Other factors are implicated in lung cancer, and these factors can interact synergistically with smoking so that total attributable risk adds up to more than 100%. These factors include occupational exposure to carcinogens (about 9-15%), radon (10%) and outdoor air pollution (1-2%).
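Attributable fractions can sum to more than 100% because a single case with interacting causes is counted in the fraction for each cause: removing either exposure would have prevented it. A minimal sketch in Python, using hypothetical counts rather than the estimates cited above:

```python
# Hypothetical breakdown of 100 lung-cancer cases (illustrative only,
# not the epidemiological estimates quoted in the text).
smoking_only = 80   # prevented only by eliminating smoking
radon_only = 5      # prevented only by eliminating radon
joint = 10          # synergistic: eliminating EITHER exposure prevents them
other = 5           # neither exposure involved

attrib_smoking = (smoking_only + joint) / 100   # 0.90
attrib_radon = (radon_only + joint) / 100       # 0.15
print(attrib_smoking + attrib_radon)            # 1.05 -> fractions sum past 100%
```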
Tobacco smoke is a complex mixture of more than 5,300 identified chemicals. The most important carcinogens in tobacco smoke have been determined by a "Margin of Exposure" approach. Using this approach, the most important tumorigenic compounds in tobacco smoke were, in order of importance, acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, cadmium, acetaldehyde, ethylene oxide, and isoprene. Most of these compounds cause DNA damage by forming DNA adducts or by inducing other alterations in DNA. Such DNA damage is subject to error-prone DNA repair and can cause replication errors; errors in repair or replication can result in mutations in tumor suppressor genes or oncogenes, leading to cancer.
Breast cancer.
Breast cancer is the second most common cancer (1.4 million cases, 10.9%) but ranks fifth as a cause of death (458,000 deaths, 6.1%). Increased risk of breast cancer is associated with persistently elevated blood levels of estrogen. Estrogen appears to contribute to breast carcinogenesis by three processes: (1) the metabolism of estrogen to genotoxic, mutagenic carcinogens, (2) the stimulation of tissue growth, and (3) the repression of phase II detoxification enzymes that metabolize reactive oxygen species (ROS), leading to increased oxidative DNA damage.
The major estrogen in humans, estradiol, can be metabolized to quinone derivatives that form adducts with DNA. These derivatives can cause depurination, the removal of bases from the phosphodiester backbone of DNA, followed by inaccurate repair or replication of the apurinic site leading to mutation and eventually cancer. This genotoxic mechanism may interact in synergy with estrogen receptor-mediated, persistent cell proliferation to ultimately cause breast cancer. Genetic background, dietary practices and environmental factors also likely contribute to the incidence of DNA damage and breast cancer risk.
Consumption of alcohol has also been linked to an increased risk for breast cancer.
Colon cancer.
Colorectal cancer is the third most common cancer [1.2 million cases (9.4%), 608,000 deaths (8.0%)]. Tobacco smoke may be responsible for up to 20% of colorectal cancers in the United States. In addition, substantial evidence implicates bile acids as an important factor in colon cancer. Twelve studies (summarized in Bernstein et al.) indicate that the bile acids deoxycholic acid (DCA) or lithocholic acid (LCA) induce production of DNA-damaging reactive oxygen species or reactive nitrogen species in human or animal colon cells. Furthermore, 14 studies showed that DCA and LCA induce DNA damage in colon cells, and 27 studies reported that bile acids cause programmed cell death (apoptosis).
Increased apoptosis can result in selective survival of cells that are resistant to induction of apoptosis. Colon cells with reduced ability to undergo apoptosis in response to DNA damage would tend to accumulate mutations, and such cells may give rise to colon cancer. Epidemiologic studies have found that fecal bile acid concentrations are increased in populations with a high incidence of colon cancer. Dietary increases in total fat or saturated fat result in elevated DCA and LCA in feces and elevated exposure of the colon epithelium to these bile acids. When the bile acid DCA was added to the standard diet of wild-type mice, invasive colon cancer was induced in 56% of the mice after 8 to 10 months. Overall, the available evidence indicates that DCA and LCA are centrally important DNA-damaging carcinogens in colon cancer.
Stomach cancer.
Stomach cancer is the fourth most common cancer [990,000 cases (7.8%), 738,000 deaths (9.7%)]. "Helicobacter pylori" infection is the main causative factor in stomach cancer. Chronic gastritis (inflammation) caused by "H. pylori" is often long-standing if not treated. Infection of gastric epithelial cells with "H. pylori" results in increased production of reactive oxygen species (ROS). ROS cause oxidative DNA damage including the major base alteration 8-hydroxydeoxyguanosine (8-OHdG). 8-OHdG resulting from ROS is increased in chronic gastritis. The altered DNA base can cause errors during DNA replication that have mutagenic and carcinogenic potential. Thus "H. pylori"-induced ROS appear to be the major carcinogens in stomach cancer because they cause oxidative DNA damage leading to carcinogenic mutations.
Diet is also thought to be a contributing factor in stomach cancer: in Japan, where very salty pickled foods are popular, the incidence of stomach cancer is high. Preserved meat such as bacon, sausages, and ham increases the risk, while a diet rich in fresh fruit, vegetables, peas, beans, grains, nuts, seeds, herbs, and spices will reduce the risk. The risk also increases with age.
Camouflage.
Camouflage is the use of any combination of materials, coloration, or illumination for concealment, either by making animals or objects hard to see, or by disguising them as something else. Examples include the leopard's spotted coat, the battledress of a modern soldier, and the leaf-mimic katydid's wings. A third approach, motion dazzle, confuses the observer with a conspicuous pattern, making the object visible but momentarily harder to locate. The majority of camouflage methods aim for crypsis, often through a general resemblance to the background, high contrast disruptive coloration, eliminating shadow, and countershading. In the open ocean, where there is no background, the principal methods of camouflage are transparency, silvering, and countershading, while the ability to produce light is used, among other things, for counter-illumination on the undersides of cephalopods such as squid. Some animals, such as chameleons and octopuses, are capable of actively changing their skin pattern and colours, whether for camouflage or for signalling. It is possible that some plants use camouflage to evade being eaten by herbivores.
Military camouflage was spurred by the increasing range and accuracy of firearms in the 19th century. In particular the replacement of the inaccurate musket with the rifle made personal concealment in battle a survival skill. In the 20th century, military camouflage developed rapidly, especially during the First World War. On land, artists such as André Mare designed camouflage schemes and observation posts disguised as trees. At sea, merchant ships and troop carriers were painted in dazzle patterns that were highly visible, but designed to confuse enemy submarines as to the target's speed, range, and heading. During and after the Second World War, a variety of camouflage schemes were used for aircraft and for ground vehicles in different theatres of war. The use of radar since the mid-20th century has largely made camouflage for fixed-wing military aircraft obsolete.
Non-military use of camouflage includes making cell telephone towers less obtrusive and helping hunters to approach wary game animals. Patterns derived from military camouflage are frequently used in fashion clothing, exploiting their strong designs and sometimes their symbolism. Camouflage themes recur in modern art, and both figuratively and literally in science fiction and works of literature.
History.
Classical antiquity.
In ancient Greece, Aristotle (384–322 BC) commented on the colour-changing abilities, both for camouflage and for signalling, of cephalopods including the octopus, in his "Historia animalium":
Zoology.
Camouflage has been a topic of interest and research in zoology for well over a century. According to Charles Darwin's 1859 theory of natural selection, features such as camouflage evolved by providing individual animals with a reproductive advantage, enabling them to leave more offspring, on average, than other members of the same species. In his "Origin of Species", Darwin wrote:
The English zoologist Edward Bagnall Poulton studied animal coloration, especially camouflage. In his 1890 book "The Colours of Animals", he classified different types such as "special protective resemblance" (where an animal looks like another object), or "general aggressive resemblance" (where a predator blends in with the background, enabling it to approach prey). His experiments showed that swallow-tailed moth pupae were camouflaged to match the backgrounds on which they were reared as larvae. Poulton's "general protective resemblance" was at that time considered to be the main method of camouflage, as when Frank Evers Beddard wrote in 1892 that "tree-frequenting animals are often green in colour. Among vertebrates numerous species of parrots, iguanas, tree-frogs, and the green tree-snake are examples". Beddard did however briefly mention other methods, including the "alluring coloration" of the flower mantis and the possibility of a different mechanism in the orange tip butterfly. He wrote that "the scattered green spots upon the under surface of the wings might have been intended for a rough sketch of the small flowerets of the plant [an umbellifer], so close is their mutual resemblance." He also explained the coloration of sea fish such as the mackerel: "Among pelagic fish it is common to find the upper surface dark-coloured and the lower surface white, so that the animal is inconspicuous when seen either from above or below."
The artist Abbott Handerson Thayer formulated what is sometimes called Thayer's Law, the principle of countershading. However, he overstated the case in the 1909 book "Concealing-Coloration in the Animal Kingdom", arguing that "All patterns and colors whatsoever of all animals that ever preyed or are preyed on are under certain normal circumstances obliterative" (that is, cryptic camouflage), and that "Not one 'mimicry' mark, not one 'warning color'... nor any 'sexually selected' color, exists anywhere in the world where there is not every reason to believe it the very best conceivable device for the concealment of its wearer", and using paintings such as "Peacock in the Woods" (1907) to reinforce his argument. Thayer was roundly mocked for these views by critics including Teddy Roosevelt.
The English zoologist Hugh Cott's 1940 book "Adaptive Coloration in Animals" corrected Thayer's errors, sometimes sharply: "Thus we find Thayer straining the theory to a fantastic extreme in an endeavour to make it cover almost every type of coloration in the animal kingdom." Cott built on Thayer's discoveries, developing a comprehensive view of camouflage based on "maximum disruptive contrast", countershading and hundreds of examples. The book explained how disruptive camouflage worked, using streaks of boldly contrasting colour, paradoxically making objects less visible by breaking up their outlines. While Cott was more systematic and balanced in his view than Thayer, and did include some experimental evidence on the effectiveness of camouflage, his 500-page textbook was, like Thayer's, mainly a natural history narrative which illustrated theories with examples.
Experimental evidence that camouflage helps prey avoid being detected by predators was first provided in 2016, when ground-nesting birds (plovers and coursers) were shown to survive according to how well their egg contrast matched the local environment.
Evolution.
As there is a lack of evidence for camouflage in the fossil record, studying the evolution of camouflage strategies is very difficult. Furthermore, camouflage traits must be both adaptive (providing a fitness gain in a given environment) and heritable (in other words, the trait must undergo positive selection). Thus, studying the evolution of camouflage strategies requires an understanding of the genetic components and the various ecological pressures that drive crypsis.
Fossil history.
Camouflage is a soft-tissue feature that is rarely preserved in the fossil record, but rare fossilised skin samples from the Cretaceous period show that some marine reptiles were countershaded. The skins, pigmented with dark-coloured eumelanin, reveal that both leatherback turtles and mosasaurs had dark backs and light bellies. There is fossil evidence of camouflaged insects going back over 100 million years, for example lacewing larvae that stuck debris all over their bodies, much as their modern descendants do, hiding them from their prey. Dinosaurs also appear to have been camouflaged, as a 120-million-year-old fossil of a "Psittacosaurus" has been preserved with countershading.
Genetics.
Camouflage does not have a single genetic origin. However, studying the genetic components of camouflage in specific organisms illuminates the various ways that crypsis can evolve among lineages. Many cephalopods have the ability to actively camouflage themselves, controlling crypsis through neural activity. For example, the genome of the common cuttlefish includes 16 copies of the reflectin gene, which grants the organism remarkable control over coloration and iridescence. The reflectin gene is thought to have originated through transposition from symbiotic "Aliivibrio fischeri" bacteria, which provide bioluminescence to their hosts. While not all cephalopods use active camouflage, ancient cephalopods may have inherited the gene horizontally from symbiotic "A. fischeri", with divergence occurring through subsequent gene duplication (as in the case of "Sepia officinalis") or gene loss (as in cephalopods with no active camouflage capabilities). This is unique as an instance of camouflage arising through horizontal gene transfer from an endosymbiont. However, other mechanisms of gene transfer, such as transposition, are common in the evolution of camouflage strategies in other lineages. Peppered moths and walking stick insects both have camouflage-related genes that stem from transposition events.
The agouti genes are orthologous genes involved in camouflage across many lineages. They produce yellow and red coloration (phaeomelanin), and work in competition with other genes that produce black and brown coloration (eumelanin). In eastern deer mice, over a period of about 8,000 years, the single agouti gene developed nine mutations, each of which made the expression of yellow fur stronger under natural selection and largely eliminated the melanin-coded black fur coloration. On the other hand, black domesticated cats have deletions of the agouti gene that prevent its expression, so no yellow or red colour is produced. The evolution, history and widespread scope of the agouti gene show that different organisms often rely on orthologous or even identical genes to develop a variety of camouflage strategies.
Ecology.
While camouflage can increase an organism's fitness, it has genetic and energetic costs. There is a trade-off between detectability and mobility. Species camouflaged to fit a specific microhabitat are less likely to be detected when in that microhabitat, but must spend energy to reach, and sometimes to remain in, such areas. Outside the microhabitat, the organism has a higher chance of detection. Generalized camouflage allows species to avoid predation over a wide range of habitat backgrounds, but is less effective. The development of generalized or specialized camouflage strategies is highly dependent on the biotic and abiotic composition of the surrounding environment.
There are many examples of the trade-offs between specific and general cryptic patterning. "Phestilla melanobrachia", a species of nudibranch that feeds on stony coral, uses specific cryptic patterning in reef ecosystems. The nudibranch siphons pigments from the consumed coral into its epidermis, adopting the same shade as the coral it eats. This allows the nudibranch to change colour (mostly between black and orange) depending on the coral system it inhabits. However, "P. melanobrachia" can only feed and lay eggs on the branches of its host coral, "Platygyra carnosa", which limits the geographical range and the efficacy of this nutritional crypsis. Furthermore, the nudibranch's colour change is not immediate, and switching between coral hosts in search of new food or shelter can be costly.
The costs associated with distractive or disruptive crypsis are more complex than those associated with background matching. Disruptive patterns distort the body outline, making it harder to identify and locate precisely. However, disruptive patterns that involve visible symmetry (such as in some butterflies) reduce survivability and increase predation. Some researchers argue that because wing shape and colour pattern are genetically linked, it is genetically costly to develop the asymmetric wing colorations that would enhance the efficacy of disruptive cryptic patterning. Symmetry does not carry a high survival cost for butterflies and moths that their predators view from above against a homogeneous background, such as the bark of a tree. On the other hand, natural selection drives species with variable backgrounds and habitats to move symmetrical patterns away from the centre of the wing and body, disrupting their predators' symmetry recognition.
Principles.
Camouflage can be achieved by different methods. Most of the methods help to hide against a background; but mimesis and motion dazzle protect without hiding. Methods may be applied on their own or in combination. Many mechanisms are visual, but some research has explored the use of techniques against olfactory (scent) and acoustic (sound) detection. Methods may also apply to military equipment.
Background matching.
Some animals' colours and patterns match a particular natural background. This is an important component of camouflage in all environments. For instance, tree-dwelling parakeets are mainly green; woodcocks of the forest floor are brown and speckled; reedbed bitterns are streaked brown and buff; in each case the animal's coloration matches the hues of its habitat. Similarly, desert animals are almost all desert coloured in tones of sand, buff, ochre, and brownish grey, whether they are mammals like the gerbil or fennec fox, birds such as the desert lark or sandgrouse, or reptiles like the skink or horned viper. Military uniforms, too, generally resemble their backgrounds; for example khaki uniforms are a muddy or dusty colour, originally chosen for service in South Asia. Many moths show industrial melanism, including the peppered moth which has coloration that blends in with tree bark. The coloration of these insects evolved between 1860 and 1940 to match the changing colour of the tree trunks on which they rest, from pale and mottled to almost black in polluted areas. This is taken by zoologists as evidence that camouflage is influenced by natural selection, as well as demonstrating that it changes where necessary to resemble the local background.
Disruptive coloration.
Disruptive patterns use strongly contrasting, non-repeating markings such as spots or stripes to break up the outlines of an animal or military vehicle, or to conceal telltale features, especially by masking the eyes, as in the common frog. Disruptive patterns may use more than one method to defeat visual systems such as edge detection. Predators like the leopard use disruptive camouflage to help them approach prey, while potential prey use it to avoid detection by predators. Disruptive patterning is common in military usage, both for uniforms and for military vehicles. Disruptive patterning, however, does not always achieve crypsis on its own, as an animal or a military target may be given away by factors like shape, shine, and shadow.
The presence of bold skin markings does not in itself prove that an animal relies on camouflage, as that depends on its behaviour. For example, although giraffes have a high contrast pattern that could be disruptive coloration, the adults are very conspicuous when in the open. Some authors have argued that adult giraffes are cryptic, since when standing among trees and bushes they are hard to see at even a few metres' distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves, even from lions, rather than on camouflage. A different explanation is implied by young giraffes being far more vulnerable to predation than adults. More than half of all giraffe calves die within a year, and giraffe mothers hide their newly born calves, which spend much of the time lying down in cover while their mothers are away feeding. The mothers return once a day to feed their calves with milk. Since the presence of a mother nearby does not affect survival, it is argued that these juvenile giraffes must be very well camouflaged; this is supported by coat markings being strongly inherited.
The possibility of camouflage in plants was little studied until the late 20th century. Leaf variegation with white spots may serve as camouflage in forest understory plants, where there is a dappled background; leaf mottling is correlated with closed habitats. Disruptive camouflage would have a clear evolutionary advantage in plants: they would tend to escape from being eaten by herbivores. Another possibility is that some plants have leaves differently coloured on upper and lower surfaces or on parts such as veins and stalks to make green-camouflaged insects conspicuous, and thus benefit the plants by favouring the removal of herbivores by carnivores. These hypotheses are testable.
Countershading.
Countershading uses graded colour to counteract the effect of self-shadowing, creating an illusion of flatness. Self-shadowing makes an animal appear darker below than on top, grading from light to dark; countershading 'paints in' tones which are darkest on top, lightest below, making the countershaded animal nearly invisible against a suitable background. Thayer observed that "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and "vice versa"". Accordingly, the principle of countershading is sometimes called Thayer's Law. Countershading is widely used by terrestrial animals, such as gazelles and grasshoppers; marine animals, such as sharks and dolphins; and birds, such as snipe and dunlin.
Countershading is less often used for military camouflage, despite Second World War experiments that showed its effectiveness. English zoologist Hugh Cott encouraged the use of methods including countershading, but despite his authority on the subject, failed to persuade the British authorities. Soldiers often wrongly viewed camouflage netting as a kind of invisibility cloak, and they had to be taught to look at camouflage practically, from an enemy observer's viewpoint. At the same time in Australia, zoologist William John Dakin advised soldiers to copy animals' methods, using their instincts for wartime camouflage.
The term countershading has a second meaning unrelated to Thayer's Law. It is that the upper and undersides of animals such as sharks, and of some military aircraft, are different colours to match the different backgrounds when seen from above or from below. Here the camouflage consists of two surfaces, each with the simple function of providing concealment against a specific background, such as a bright water surface or the sky. The body of a shark or the fuselage of an aircraft is not gradated from light to dark to appear flat when seen from the side. The camouflage methods used are the matching of background colour and pattern, and disruption of outlines.
Eliminating shadow.
Some animals, such as the horned lizards of North America, have evolved elaborate measures to eliminate shadow. Their bodies are flattened, with the sides thinning to an edge; the animals habitually press their bodies to the ground; and their sides are fringed with white scales which effectively hide and disrupt any remaining areas of shadow there may be under the edge of the body. The theory that the body shape of the horned lizards which live in open desert is adapted to minimise shadow is supported by the one species which lacks fringe scales, the roundtail horned lizard, which lives in rocky areas and resembles a rock. When this species is threatened, it makes itself look as much like a rock as possible by curving its back, emphasizing its three-dimensional shape. Some species of butterflies, such as the speckled wood, "Pararge aegeria", minimise their shadows when perched by closing the wings over their backs, aligning their bodies with the sun, and tilting to one side towards the sun, so that the shadow becomes a thin inconspicuous line rather than a broad patch. Similarly, some ground-nesting birds, including the European nightjar, select a resting position facing the sun. Eliminating shadow was identified as a principle of military camouflage during the Second World War.
Distraction.
Many prey animals have conspicuous high-contrast markings which paradoxically attract the predator's gaze. These distractive markings may serve as camouflage by distracting the predator's attention from recognising the prey as a whole, for example by keeping the predator from identifying the prey's outline. Experimentally, search times for blue tits increased when artificial prey had distractive markings.
Cryptic behaviour.
Movement catches the eye of prey animals on the lookout for predators, and of predators hunting for prey. Most methods of crypsis therefore also require suitable cryptic behaviour, such as lying down and keeping still to avoid being detected, or in the case of stalking predators such as the tiger, moving with extreme stealth, both slowly and quietly, watching its prey for any sign they are aware of its presence. As an example of the combination of behaviours and other methods of crypsis involved, young giraffes seek cover, lie down, and keep still, often for hours until their mothers return; their skin pattern blends with the pattern of the vegetation, while the chosen cover and lying position together hide the animals' shadows. The flat-tail horned lizard similarly relies on a combination of methods: it is adapted to lie flat in the open desert, relying on stillness, its cryptic coloration, and concealment of its shadow to avoid being noticed by predators. In the ocean, the leafy sea dragon sways mimetically, like the seaweeds amongst which it rests, as if rippled by wind or water currents. Swaying is seen also in some insects, like Macleay's spectre stick insect, "Extatosoma tiaratum". The behaviour may be motion crypsis, preventing detection, or motion masquerade, promoting misclassification (as something other than prey), or a combination of the two.
Motion camouflage.
Most forms of camouflage are ineffective when the camouflaged animal or object moves, because the motion is easily seen by the observing predator, prey or enemy. However, insects such as hoverflies and dragonflies use motion camouflage: the hoverflies to approach possible mates, and the dragonflies to approach rivals when defending territories. Motion camouflage is achieved by moving so as to stay on a straight line between the target and a fixed point in the landscape; the pursuer thus appears not to move, but only to loom larger in the target's field of vision. Some insects sway while moving to appear to be blown by the wind.
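The geometry can be made concrete with a short sketch (hypothetical coordinates, for illustration only): the shadower keeps itself on the line joining a fixed landmark to the target, so from the target's viewpoint it stays aligned with the landmark, and only its looming betrays the approach.

```python
import numpy as np

def shadower_position(landmark, target, closing_fraction):
    """Place the pursuer on the line from the fixed landmark to the
    target; closing_fraction runs from 0 (at the landmark) to 1 (at the
    target), so raising it closes the range while the pursuer's bearing,
    as seen from the target, never changes from the landmark's."""
    return landmark + closing_fraction * (target - landmark)

landmark = np.array([0.0, 0.0])            # fixed point in the background
for step in range(5):
    target = np.array([10.0 + step, 5.0])  # target drifting to the right
    s = 0.15 * (step + 1)                  # pursuer steadily closes in
    print(shadower_position(landmark, target, s))
```

Raising the closing fraction toward 1 is what produces the looming in the target's field of vision, the only remaining cue.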
The same method can be used for military purposes, for example by missiles to minimise their risk of detection by an enemy. However, missile engineers, and animals such as bats, use the method mainly for its efficiency rather than camouflage.
Mimesis.
In mimesis (also called "masquerade"), the camouflaged object looks like something else which is of no special interest to the observer. Mimesis is common in prey animals, for example when a peppered moth caterpillar mimics a twig, or a grasshopper mimics a dry leaf. It is also found in nest structures; some eusocial wasps, such as "Leipomeles dorsata", build a nest envelope in patterns that mimic the leaves surrounding the nest.
Mimesis is also employed by some predators and parasites to lure their prey. For example, a flower mantis mimics a particular kind of flower, such as an orchid. This tactic has occasionally been used in warfare, for example with heavily armed Q-ships disguised as merchant ships.
The common cuckoo, a brood parasite, provides examples of mimesis both in the adult and in the egg. The female lays her eggs in nests of other, smaller species of bird, one per nest. The female mimics a sparrowhawk. The resemblance is sufficient to make small birds take action to avoid the apparent predator. The female cuckoo then has time to lay her egg in their nest without being seen to do so. The cuckoo's egg mimics the eggs of the host species, reducing its chance of being rejected.
Motion dazzle.
Most forms of camouflage are made ineffective by movement: a deer or grasshopper may be highly cryptic when motionless, but instantly seen when it moves. But one method, motion dazzle, requires rapidly moving bold patterns of contrasting stripes. Motion dazzle may degrade predators' ability to estimate the prey's speed and direction accurately, giving the prey an improved chance of escape. Motion dazzle distorts speed perception and is most effective at high speeds; stripes can also distort perception of size (and so, perceived range to the target). As of 2011, motion dazzle had been proposed for military vehicles, but never applied. Since motion dazzle patterns would make animals more difficult to locate accurately when moving, but easier to see when stationary, there would be an evolutionary trade-off between motion dazzle and crypsis.
An animal that is commonly thought to be dazzle-patterned is the zebra. The bold stripes of the zebra have been claimed to be disruptive camouflage, background-blending and countershading. After many years in which the purpose of the coloration was disputed, an experimental study by Tim Caro suggested in 2012 that the pattern reduces the attractiveness of stationary models to biting flies such as horseflies and tsetse flies. However, a simulation study by Martin How and Johannes Zanker in 2014 suggests that when moving, the stripes may confuse observers, such as mammalian predators and biting insects, by two visual illusions: the wagon-wheel effect, where the perceived motion is inverted, and the barberpole illusion, where the perceived motion is in a wrong direction.
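The wagon-wheel component of this explanation is temporal aliasing, which can be shown with a two-line calculation. The stripe and sampling frequencies below are illustrative, and treating motion perception as discrete sampling is a simplification:

```python
def perceived_stripe_hz(true_hz, sample_hz):
    """Fold a temporal frequency into the Nyquist band [-fs/2, fs/2);
    a negative result means the sampled pattern appears to move
    backwards (the wagon-wheel effect)."""
    half = sample_hz / 2.0
    return ((true_hz + half) % sample_hz) - half

# Stripes passing a point 10 times per second, sampled at 15 Hz,
# appear to drift backwards at 5 Hz.
print(perceived_stripe_hz(10.0, 15.0))   # -> -5.0
```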
Mechanisms.
Animals can camouflage themselves by one or more principles using a variety of mechanisms. For example, some animals achieve background matching by changing their skin coloration to resemble their current background.
Changeable skin coloration.
Animals such as chameleons, frogs, flatfish such as the peacock flounder, squid, octopuses and even the isopod "Idotea balthica" actively change their skin patterns and colours using special chromatophore cells to resemble their current background, or, as in most chameleons, for signalling. However, Smith's dwarf chameleon does use active colour change for camouflage.
Each chromatophore contains pigment of only one colour. In fish and frogs, colour change is mediated by a type of chromatophore known as melanophores that contain dark pigment. A melanophore is star-shaped; it contains many small pigmented organelles which can be dispersed throughout the cell, or aggregated near its centre. When the pigmented organelles are dispersed, the cell makes a patch of the animal's skin appear dark; when they are aggregated, most of the cell, and the animal's skin, appears light. In frogs, the change is controlled relatively slowly, mainly by hormones. In fish, the change is controlled by the brain, which sends signals directly to the chromatophores, as well as producing hormones.
The skins of cephalopods such as the octopus contain complex units, each consisting of a chromatophore with surrounding muscle and nerve cells. The cephalopod chromatophore has all its pigment grains in a small elastic sac, which can be stretched or allowed to relax under the control of the brain to vary its opacity. By controlling chromatophores of different colours, cephalopods can rapidly change their skin patterns and colours.
On a longer timescale, animals like the Arctic hare, Arctic fox, stoat, and rock ptarmigan have snow camouflage, changing their coat colour (by moulting and growing new fur or feathers) from brown or grey in the summer to white in the winter; the Arctic fox is the only species in the dog family to do so. However, Arctic hares which live in the far north of Canada, where summer is very short, remain white year-round.
The principle of varying coloration either rapidly or with the changing seasons has military applications. "Active camouflage" could in theory make use of both dynamic colour change and counterillumination. Simple methods such as changing uniforms and repainting vehicles for winter have been in use since World War II. In 2011, BAE Systems announced their Adaptiv infrared camouflage technology. It uses about 1,000 hexagonal panels to cover the sides of a tank. The Peltier plate panels are heated and cooled to match either the vehicle's surroundings (crypsis), or an object such as a car (mimesis), when viewed in infrared.
Self-decoration.
Some animals actively seek to hide by decorating themselves with materials such as twigs, sand, or pieces of shell from their environment, to break up their outlines, to conceal the features of their bodies, and to match their backgrounds. For example, a caddisfly larva builds a decorated case and lives almost entirely inside it; a decorator crab covers its back with seaweed, sponges, and stones. The nymph of the predatory masked bug uses its hind legs and a 'tarsal fan' to decorate its body with sand or dust. There are two layers of bristles (trichomes) over the body. On these, the nymph spreads an inner layer of fine particles and an outer layer of coarser particles. The camouflage may conceal the bug from both predators and prey.
Similar principles can be applied for military purposes, for instance when a sniper wears a ghillie suit designed to be further camouflaged by decoration with materials such as tufts of grass from the sniper's immediate environment. Such suits were used as early as 1916, the British army having adopted "coats of motley hue and stripes of paint" for snipers. Cott takes the example of the larva of the blotched emerald moth, which fixes a screen of fragments of leaves to its specially hooked bristles, to argue that military camouflage uses the same method, pointing out that the "device is ... essentially the same as one widely practised during the Great War for the concealment, not of caterpillars, but of caterpillar-tractors, [gun] battery positions, observation posts and so forth."
Transparency.
Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials with refractive indices different from seawater's. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at depth; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters.
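These transparency figures can be tied together with the standard exponential contrast-attenuation model of underwater visibility. The sketch below is illustrative only: the 2% contrast threshold and the attenuation coefficient are assumed values, not measurements from the studies described above.

```python
import math

def sighting_distance(transparency, contrast_threshold, attenuation):
    """Distance (metres) at which an object's apparent contrast falls to
    the viewer's threshold, using C(d) = C0 * exp(-attenuation * d),
    with inherent contrast C0 taken as 1 - transparency."""
    c0 = 1.0 - transparency
    if c0 <= contrast_threshold:
        return 0.0  # effectively invisible at any range
    return math.log(c0 / contrast_threshold) / attenuation

# Assumed values: a 2% contrast threshold for the predator and an
# attenuation of 0.2 per metre for bright, shallow water.
for t in (0.5, 0.9, 0.98):
    print(f"transparency {t:.0%}: seen within {sighting_distance(t, 0.02, 0.2):.1f} m")
```

With these assumed parameters, a half-transparent animal is visible at about 16 metres while a 98 percent transparent one is effectively invisible, matching the qualitative point above.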
Some tissues such as muscles can be made transparent, provided either they are very thin or organised as regular layers or fibrils that are small compared to the wavelength of visible light. A familiar example is the transparency of the lens of the vertebrate eye, which is made of the protein crystallin, and the vertebrate cornea which is made of the protein collagen. Other structures cannot be made transparent, notably the retinas or equivalent light-absorbing structures of eyes – they must absorb light to be able to function. The camera-type eye of vertebrates and cephalopods must be completely opaque. Finally, some structures are visible for a reason, such as to lure prey. For example, the nematocysts (stinging cells) of the transparent siphonophore "Agalma okenii" resemble small copepods. Examples of transparent marine animals include a wide variety of larvae, including radiata (coelenterates), siphonophores, salps (floating tunicates), gastropod molluscs, polychaete worms, many shrimplike crustaceans, and fish; whereas the adults of most of these are opaque and pigmented, resembling the seabed or shores where they live. Adult comb jellies and jellyfish obey the rule, often being mainly transparent. Cott suggests this follows the more general rule that animals resemble their background: in a transparent medium like seawater, that means being transparent. The small Amazon River fish "Microphilypnus amazonicus" and the shrimps it associates with, "Pseudopalaemon gouldingi", are so transparent as to be "almost invisible"; further, these species appear to select whether to be transparent or more conventionally mottled (disruptively patterned) according to the local background in the environment.
Silvering.
Where transparency cannot be achieved, it can be imitated effectively by silvering to make an animal's body highly reflective. At medium depths at sea, light comes from above, so a mirror oriented vertically makes animals such as fish invisible from the side. Most fish in the upper ocean such as sardine and herring are camouflaged by silvering.
The marine hatchetfish is extremely flattened laterally, leaving the body just millimetres thick, and the body is so silvery as to resemble aluminium foil. The mirrors consist of microscopic structures similar to those used to provide structural coloration: stacks of between 5 and 10 crystals of guanine spaced about a quarter of a wavelength apart to interfere constructively and achieve nearly 100 per cent reflection. In the deep waters in which the hatchetfish lives, only blue light with a wavelength of 500 nanometres percolates down and needs to be reflected, so mirrors spaced 125 nanometres apart provide good camouflage.
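In this simplified form (ignoring the refractive indices of the guanine crystals and the cytoplasm between them, which in a real stack set the optical rather than the physical thickness), the 125-nanometre figure follows directly from the quarter-wave condition:

$$d \approx \frac{\lambda}{4} = \frac{500\ \text{nm}}{4} = 125\ \text{nm}$$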
In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multilayer mirrors made of protein rather than guanine.
Counter-illumination.
Counter-illumination means producing light to match a background that is brighter than an animal's body or military vehicle; it is a form of active camouflage. It is notably used by some species of squid, such as the firefly squid and the midwater squid. The latter has light-producing organs (photophores) scattered all over its underside; these create a sparkling glow that prevents the animal from appearing as a dark shape when seen from below. Counterillumination camouflage is the likely function of the bioluminescence of many marine organisms, though light is also produced to attract or to detect prey and for signalling.
Counterillumination has rarely been used for military purposes. "Diffused lighting camouflage" was trialled by Canada's National Research Council during the Second World War. It involved projecting light on to the sides of ships to match the faint glow of the night sky, requiring awkward external platforms to support the lamps. The Canadian concept was refined in the American Yehudi lights project, and trialled in aircraft including B-24 Liberators and naval Avengers. The planes were fitted with forward-pointing lamps automatically adjusted to match the brightness of the night sky. This enabled them to approach much closer to a target before being seen. Counterillumination was made obsolete by radar, and neither diffused lighting camouflage nor Yehudi lights entered active service.
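The matching logic can be illustrated as a simple feedback loop. This is a hypothetical sketch of the general idea, not the historical Yehudi circuit, whose details are not given here:

```python
def adjust_lamp(lamp_output, sky_brightness, gain=0.5):
    """One control step: nudge the lamp output toward the measured sky
    brightness (arbitrary units), shrinking the aircraft's dark silhouette."""
    return lamp_output + gain * (sky_brightness - lamp_output)

lamp = 0.0
for sky in (10.0, 10.0, 12.0, 8.0):   # night-sky brightness drifting over time
    lamp = adjust_lamp(lamp, sky)
    print(round(lamp, 2))              # lamp output converges on the sky value
```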
Ultra-blackness.
Some deep-sea fishes have very black skin, reflecting under 0.5% of ambient light. This can prevent detection by predators or prey fish that use bioluminescence for illumination. A specimen of "Oneirodes" had particularly black skin, reflecting only 0.044% of light at a wavelength of 480 nm. The ultra-blackness is achieved with a thin but continuous layer of particles in the dermis, melanosomes. These particles both absorb most of the light and are sized and shaped so as to scatter rather than reflect most of the rest. Modelling suggests that this camouflage should reduce the distance at which such a fish can be seen by a factor of six, compared with a fish of a nominal 2% reflectance. Species with this adaptation are widely dispersed among the orders of the phylogenetic tree of bony fishes (Actinopterygii), implying that natural selection has driven the convergent evolution of ultra-blackness camouflage independently many times.
Applications.
Military.
Before 1800.
Ship camouflage was occasionally used in ancient times. Philostratus wrote in his "Imagines" that Mediterranean pirate ships could be painted blue-gray for concealment. Vegetius says that "Venetian blue" (sea green) was used in the Gallic Wars, when Julius Caesar sent his "speculatoria navigia" (reconnaissance boats) to gather intelligence along the coast of Britain; the ships were painted entirely in bluish-green wax, with sails, ropes and crew the same colour. There is little evidence of military use of camouflage on land before 1800, but two unusual ceramics show men in Peru's Mochica culture, from before 500 AD, hunting birds with blowpipes fitted with a kind of shield near the mouth, perhaps to conceal the hunters' hands and faces. Another early source is a 15th-century French manuscript, "The Hunting Book of Gaston Phebus", showing a horse pulling a cart which contains a hunter armed with a crossbow under a cover of branches, perhaps serving as a hide for shooting game. Jamaican Maroons are said to have used plant materials as camouflage in the First Maroon War.
19th-century origins.
The development of military camouflage was driven by the increasing range and accuracy of infantry firearms in the 19th century. In particular the replacement of the inaccurate musket with weapons such as the Baker rifle made personal concealment in battle essential. Two Napoleonic War skirmishing units of the British Army, the 95th Rifle Regiment and the 60th Rifle Regiment, were the first to adopt camouflage in the form of a rifle green jacket, while the Line regiments continued to wear scarlet tunics. A contemporary study in 1800 by the English artist and soldier Charles Hamilton Smith provided evidence that grey uniforms were less visible than green ones at a range of 150 yards.
In the American Civil War, rifle units such as the 1st United States Sharp Shooters (in the Federal army) similarly wore green jackets while other units wore more conspicuous colours. The first British Army unit to adopt khaki uniforms was the Corps of Guides at Peshawar, when Sir Harry Lumsden and his second in command, William Hodson introduced a "drab" uniform in 1848. Hodson wrote that it would be more appropriate for the hot climate, and help make his troops "invisible in a land of dust". Later they improvised by dyeing cloth locally. Other regiments in India soon adopted the khaki uniform, and by 1896 khaki drill uniform was used everywhere outside Europe; by the Second Boer War six years later it was used throughout the British Army.
During the late 19th century camouflage was applied to British coastal fortifications. The fortifications around Plymouth, England were painted in the late 1880s in "irregular patches of red, brown, yellow and green." From 1891 onwards British coastal artillery was permitted to be painted in suitable colours "to harmonise with the surroundings" and by 1904 it was standard practice that artillery and mountings should be painted with "large irregular patches of different colours selected to suit local conditions."
First World War.
In the First World War, the French army formed a camouflage corps, led by Lucien-Victor Guirand de Scévola, employing artists known as "camoufleurs" to create schemes such as tree observation posts and covers for guns. Other armies soon followed them. The term "camouflage" probably comes from "camoufler", a Parisian slang term meaning "to disguise", and may have been influenced by "camouflet", a French term meaning "smoke blown in someone's face". The English zoologist John Graham Kerr, artist Solomon J. Solomon and the American artist Abbott Thayer led attempts to introduce scientific principles of countershading and disruptive patterning into military camouflage, with limited success. In early 1916 the Royal Naval Air Service began to create dummy air fields to draw the attention of enemy planes to empty land. They created decoy homes and lined fake runways with flares, which were meant to help protect real towns from night raids. This strategy was not common practice and did not succeed at first, but in 1918 it caught the Germans off guard multiple times.
Ship camouflage was introduced in the early 20th century as the range of naval guns increased, with ships painted grey all over. In April 1917, when German U-boats were sinking many British ships with torpedoes, the marine artist Norman Wilkinson devised dazzle camouflage, which paradoxically made ships more visible but harder to target. In Wilkinson's own words, dazzle was designed "not for low visibility, but in such a way as to break up her form and thus confuse a submarine officer as to the course on which she was heading".
Second World War.
In the Second World War, the zoologist Hugh Cott, a protégé of Kerr, worked to persuade the British army to use more effective camouflage methods, including countershading, but, like Kerr and Thayer in the First World War, with limited success. For example, he painted two rail-mounted coastal guns, one in conventional style, one countershaded. In aerial photographs, the countershaded gun was essentially invisible. The power of aerial observation and attack led every warring nation to camouflage targets of all types. The Soviet Union's Red Army created the comprehensive doctrine of "Maskirovka" for military deception, including the use of camouflage. For example, during the Battle of Kursk, General Katukov, the commander of the Soviet 1st Tank Army, remarked that the enemy "did not suspect that our well-camouflaged tanks were waiting for him. As we later learned from prisoners, we had managed to move our tanks forward unnoticed". The tanks were concealed in previously prepared defensive emplacements, with only their turrets above ground level. In the air, Second World War fighters were often painted in ground colours above and sky colours below, attempting two different camouflage schemes for observers above and below. Bombers and night fighters were often black, while maritime reconnaissance planes were usually white, to avoid appearing as dark shapes against the sky. For ships, dazzle camouflage was mainly replaced with plain grey in the Second World War, though experimentation with colour schemes continued.
As in the First World War, artists were pressed into service; for example, the surrealist painter Roland Penrose became a lecturer at the newly founded Camouflage Development and Training Centre at Farnham Castle, writing the practical "Home Guard Manual of Camouflage". The film-maker Geoffrey Barkas ran the Middle East Command Camouflage Directorate during the 1941–1942 war in the Western Desert, including the successful deception of Operation Bertram. Hugh Cott was chief instructor; the artist camouflage officers, who called themselves "camoufleurs", included Steven Sykes and Tony Ayrton. In Australia, artists were also prominent in the Sydney Camouflage Group, formed under the chairmanship of Professor William John Dakin, a zoologist from Sydney University. Max Dupain, Sydney Ure Smith, and William Dobell were among the members of the group, which worked at Bankstown Airport, RAAF Base Richmond and Garden Island Dockyard. In the United States, artists like John Vassos took a certificate course in military and industrial camouflage at the American School of Design with Baron Nicholas Cerkasoff, and went on to create camouflage for the Air Force.
After 1945.
Camouflage has been used to protect military equipment such as vehicles, guns, ships, aircraft and buildings as well as individual soldiers and their positions.
Vehicle camouflage methods begin with paint, which offers at best only limited effectiveness. Other methods for stationary land vehicles include covering with improvised materials such as blankets and vegetation, and erecting nets, screens and soft covers which may suitably reflect, scatter or absorb near infrared and radar waves. Some military textiles and vehicle camouflage paints also reflect infrared to help provide concealment from night vision devices.
After the Second World War, radar made camouflage generally less effective, though coastal boats are sometimes painted like land vehicles. Aircraft camouflage too came to be seen as less important because of radar, and aircraft of different air forces, such as the Royal Air Force's Lightning, were often uncamouflaged.
Many camouflaged textile patterns have been developed to suit the need to match combat clothing to different kinds of terrain (such as woodland, snow, and desert). The design of a pattern effective in all terrains has proved elusive. The American Universal Camouflage Pattern of 2004 attempted to suit all environments, but was withdrawn after a few years of service. Terrain-specific patterns have sometimes been developed but are ineffective in other terrains. The problem of making a pattern that works at different ranges has been solved with multiscale designs, often with a pixellated appearance and designed digitally, that provide a fractal-like range of patch sizes so they appear disruptively coloured both at close range and at a distance. The first genuinely digital camouflage pattern was the Canadian Disruptive Pattern (CADPAT), issued to the army in 2002, soon followed by the American Marine pattern (MARPAT). A pixellated appearance is not essential for this effect, though it is simpler to design and to print.
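A toy illustration of the multiscale idea (a hypothetical sketch, not any army's actual pattern-generation algorithm; all names and parameters here are invented) is to sum random block noise at several patch sizes and quantise the result into a small palette:

```python
import numpy as np

def multiscale_pattern(size=256, scales=(4, 16, 64), n_colours=4, seed=0):
    """Toy multiscale 'digital' camouflage: sum random block noise at
    several patch sizes, then quantise into a small palette."""
    rng = np.random.default_rng(seed)
    field = np.zeros((size, size))
    for s in scales:  # one layer of random square patches per scale
        coarse = rng.random((size // s + 1, size // s + 1))
        field += np.kron(coarse, np.ones((s, s)))[:size, :size]
    # Quantise the summed field into n_colours bands (palette indices,
    # which could be mapped to greens and browns for printing)
    cuts = np.quantile(field, np.linspace(0, 1, n_colours + 1)[1:-1])
    return np.digitize(field, cuts)

pattern = multiscale_pattern()
print(pattern.shape, np.unique(pattern))  # (256, 256) [0 1 2 3]
```

Layering patches at several scales is what yields the fractal-like spread of patch sizes described above, so the pattern remains disruptive whether viewed from close up or at a distance.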
Hunting.
Hunters of game have long made use of camouflage in the form of materials such as animal skins, mud, foliage, and green or brown clothing to enable them to approach wary game animals. Field sports such as driven grouse shooting conceal hunters in hides (also called blinds or shooting butts). Modern hunting clothing makes use of fabrics that provide a disruptive camouflage pattern; for example, in 1986 the hunter Bill Jordan created cryptic clothing for hunters, printed with images of specific kinds of vegetation such as grass and branches.
Civil structures.
Camouflage is occasionally used to make built structures less conspicuous: for example, in South Africa, towers carrying cell telephone antennae are sometimes camouflaged as tall trees with plastic branches, in response to "resistance from the community". Since this method is costly (a figure of three times the normal cost is mentioned), alternative forms of camouflage can include using neutral colours or familiar shapes such as cylinders and flagpoles. Conspicuousness can also be reduced by siting masts near, or on, other structures.
Automotive manufacturers often use patterns to disguise upcoming products. This camouflage is designed to obscure the vehicle's visual lines, and is used along with padding, covers, and decals. The patterns' purpose is to prevent visual observation (and, to a lesser degree, photography) that would otherwise enable reproduction of the vehicle's form.
Fashion, art and society.
Military camouflage patterns influenced fashion and art from the time of the First World War onwards. Gertrude Stein recalled the cubist artist Pablo Picasso's reaction in around 1915:
In 1919, the attendants of a "dazzle ball", hosted by the Chelsea Arts Club, wore dazzle-patterned black and white clothing. The ball influenced fashion and art via postcards and magazine articles. The "Illustrated London News" announced:
More recently, fashion designers have often used camouflage fabric for its striking designs, its "patterned disorder" and its symbolism. Camouflage clothing can be worn largely for its symbolic significance rather than for fashion, as when, during the late 1960s and early 1970s in the United States, anti-war protestors often ironically wore military clothing during demonstrations against the American involvement in the Vietnam War.
Modern artists such as Ian Hamilton Finlay have used camouflage to reflect on war. His 1973 screenprint of a tank camouflaged in a leaf pattern, "Arcadia", is described by the Tate as drawing "an ironic parallel between this idea of a natural paradise and the camouflage patterns on a tank". The title refers to the Utopian Arcadia of poetry and art, and the "memento mori" Latin phrase "Et in Arcadia ego" which recurs in Hamilton Finlay's work. In science fiction, "Camouflage" is a novel about shapeshifting alien beings by Joe Haldeman. The word is used more figuratively in works of literature such as Thaisa Frank's collection of stories of love and loss, "A Brief History of Camouflage".
In 1986, Andy Warhol began a series of monumental camouflage paintings, which helped to transform camouflage into a popular print pattern. A year later, in 1987, New York designer Stephen Sprouse used Warhol's camouflage prints as the basis for his Autumn Winter 1987 collection.
|
6449
|
7903804
|
https://en.wikipedia.org/wiki?curid=6449
|
Clock
|
A clock or chronometer is a device that measures and displays time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units such as the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia.
Some predecessors to the modern clock may be considered "clocks" that are based on movement in nature: A sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, a well-known example being the hourglass. Water clocks, along with sundials, are possibly the oldest time-measuring instruments. A major advance occurred with the invention of the verge escapement, which made possible the first mechanical clocks around 1300 in Europe, which kept time with oscillating timekeepers like balance wheels.
Traditionally, in horology (the study of timekeeping), the term "clock" was used for a striking clock, while a clock that did not strike the hours audibly was called a timepiece. This distinction is not generally made any longer. Watches and other timepieces that can be carried on one's person are usually not referred to as clocks. Spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished. The next development in accuracy occurred after 1656 with the invention of the pendulum clock by Christiaan Huygens. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The mechanism of a timepiece with a series of gears driven by a spring or weights is referred to as clockwork; the term is used by extension for a similar mechanism not used in a timepiece. The electric clock was patented in 1840, and electronic clocks were introduced in the 20th century, becoming widespread with the development of small battery-powered semiconductor devices.
The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates at a particular frequency.
This object can be a pendulum, a balance wheel, a tuning fork, a quartz crystal, or the vibration of electrons in atoms as they emit microwaves, the last of which is so precise that it serves as the formal definition of the second.
Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face and moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 12-hour time notation and 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch.
Etymology.
The word "clock" derives from the medieval Latin word for 'bell'——and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from the Middle Low German and Middle Dutch .
The word is also derived from the Middle English , Old North French , or Middle Dutch , all of which mean 'bell'.
History of time-measuring devices.
Sundials.
The apparent position of the Sun in the sky changes over the course of each day, reflecting the rotation of the Earth. Shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. A sundial shows the time by displaying the position of a shadow on a (usually) flat surface that has markings that correspond to the hours. Sundials can be horizontal, vertical, or in other orientations. Sundials were widely used in ancient times. With knowledge of latitude, a well-constructed sundial can measure local solar time with reasonable accuracy, within a minute or two. Sundials continued to be used to monitor the performance of clocks until the 1830s, when the use of the telegraph and trains standardized time and time zones between cities.
Devices that measure duration, elapsed time and intervals.
Many devices can be used to mark the passage of time without respect to reference time (time of day, hours, minutes, etc.) and can be useful for measuring duration or intervals. Examples of such duration timers are candle clocks, incense clocks, and the hourglass. Both the candle clock and the incense clock work on the same principle, wherein the consumption of resources is more or less constant, allowing reasonably precise and repeatable estimates of time passages. In the hourglass, fine sand pouring through a tiny hole at a constant rate indicates an arbitrary, predetermined passage of time. The resource is not consumed, but re-used.
Water clocks.
Water clocks, along with sundials, are possibly the oldest time-measuring instruments, with the only exception being the day-counting tally stick. Given their great antiquity, where and when they first existed is not known and is perhaps unknowable. The bowl-shaped outflow is the simplest form of a water clock and is known to have existed in Babylon and Egypt around the 16th century BC. Other regions of the world, including India and China, also have early evidence of water clocks, but the earliest dates are less certain. Some authors, however, write about water clocks appearing as early as 4000 BC in these regions of the world.
The Macedonian astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC, which housed a large clepsydra inside as well as multiple prominent sundials outside, allowing it to function as a kind of early clocktower. The Greek and Roman civilizations advanced water clock design with improved accuracy. These advances were passed on through Byzantine and Islamic times, eventually making their way back to Europe. Independently, the Chinese developed their own advanced water clocks by 725 AD, passing their ideas on to Korea and Japan.
Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. Pre-modern societies did not have the same precise timekeeping requirements that exist in modern industrial societies, where every hour of work or rest is monitored and work may start or finish at any time regardless of external conditions. Instead, water clocks in ancient societies were used mainly for astrological purposes. These early water clocks were calibrated with a sundial. While never reaching the level of accuracy of a modern timepiece, the water clock was the most accurate and commonly used timekeeping device for millennia, until it was replaced by the more accurate pendulum clock in 17th-century Europe.
Islamic civilization is credited with further advancing the accuracy of clocks through elaborate engineering. In 797 (or possibly 801), the Abbasid caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas together with a "particularly elaborate example" of a water clock. Pope Sylvester II introduced clocks to northern and western Europe around 1000 AD.
Mechanical water clocks.
The first known geared clock was invented by the mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes' astronomical clock was also a cuckoo clock, with birds singing and moving every hour, and is considered the first carillon clock, since it played music at the same time as a figure of a person blinked his eyes, surprised by the singing birds. The Archimedes clock worked with a system of four weights, counterweights, and strings, regulated by floats in a water container with siphons that kept the mechanism running automatically. The principles of this type of clock are described by the mathematician and physicist Hero, who notes that some of them worked with a chain that turned a gear in the mechanism. Another Greek clock, probably constructed at the time of Alexander, stood in Gaza, as described by Procopius. The Gaza clock was probably a Meteoroskopeion, i.e., a building showing celestial phenomena and the time. It had a pointer for the time and automata similar to those of the Archimedes clock: twelve doors opened, one every hour, with Hercules performing his labours, the Lion at one o'clock, and so on; at night a lamp became visible every hour through twelve windows that opened to show the time.
The Tang dynasty Buddhist monk Yi Xing, along with government official Liang Lingzan, applied an escapement in 723 (or 725) to the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. The Song dynasty polymath Su Song (1020–1101) incorporated it into his monumental astronomical clock tower of Kaifeng in 1088. His astronomical clock and rotating armillary sphere still relied on the use of either flowing water during the spring, summer, and autumn seasons or liquid mercury during the freezing temperatures of winter (i.e., hydraulics).
In Su Song's waterwheel linkwork device, the action of the escapement's arrest and release was achieved by gravity exerted periodically as the continuous flow of liquid-filled containers of a limited size. In a single line of evolution, Su Song's clock therefore united the concepts of the clepsydra and the mechanical clock into one device run by mechanics and hydraulics. In his memorial, Su Song wrote about this concept:
According to your servant's opinion there have been many systems and designs for astronomical instruments during past dynasties all differing from one another in minor respects. But the principle of the use of water-power for the driving mechanism has always been the same. The heavens move without ceasing but so also does water flow (and fall). Thus if the water is made to pour with perfect evenness, then the comparison of the rotary movements (of the heavens and the machine) will show no discrepancy or contradiction; for the unresting follows the unceasing.
Su Song was also strongly influenced by the earlier armillary sphere created by Zhang Sixun (976 AD), who also employed the escapement mechanism and used liquid mercury instead of water in the waterwheel of his astronomical clock tower. The mechanical clockworks for Su Song's astronomical tower featured a great driving-wheel that was 11 feet in diameter, carrying 36 scoops, into each of which water was poured at a uniform rate from the "constant-level tank". The main driving shaft of iron, with its cylindrical necks supported on iron crescent-shaped bearings, ended in a pinion, which engaged a gear wheel at the lower end of the main vertical transmission shaft. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet), featured a clock escapement, and was indirectly powered by a rotating wheel with either falling water or liquid mercury. A full-sized working replica of Su Song's clock exists in the Republic of China (Taiwan)'s National Museum of Natural Science, Taichung city. This full-scale, fully functional replica, approximately 12 meters (39 feet) in height, was constructed from Su Song's original descriptions and mechanical drawings. The Chinese escapement spread west and was the source for Western escapement technology.
In the 12th century, Al-Jazari, an engineer from Mesopotamia (lived 1136–1206) who worked for the Artuqid king of Diyar-Bakr, Nasir al-Din, made numerous clocks of all shapes and sizes. The most famous of these included the elephant, scribe, and castle clocks, some of which have been successfully reconstructed. As well as telling the time, these grand clocks were symbols of the status, grandeur, and wealth of the Artuqid state. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts.
Fully mechanical.
The word "horologia" (from the Greek "hōra", 'hour', and "legein", 'to tell') was used to describe early mechanical clocks, but the use of this word (still used in several Romance languages) for all timekeepers conceals the true nature of the mechanisms. For example, there is a record that in 1176, Sens Cathedral in France installed an 'horologe', but the mechanism used is unknown. According to Jocelyn de Brakelond, in 1198, during a fire at the abbey of St Edmundsbury (now Bury St Edmunds), the monks "ran to the clock" to fetch water, indicating that their water clock had a reservoir large enough to help extinguish the occasional fire. The word "clock" (via Medieval Latin "clocca" from Old Irish "clocc", both meaning 'bell'), which gradually superseded "horologe", suggests that it was the sound of bells that also characterized the prototype mechanical clocks that appeared during the 13th century in Europe.
In Europe, between 1280 and 1320, there was an increase in the number of references to clocks and horologes in church records, and this probably indicates that a new type of clock mechanism had been devised. Existing clock mechanisms that used water power were being adapted to take their driving power from falling weights. This power was controlled by some form of oscillating mechanism, probably derived from existing bell-ringing or alarm devices. This controlled release of power – the escapement – marks the beginning of the true mechanical clock, which differed from the previously mentioned cogwheel clocks. The verge escapement mechanism appeared during the surge of true mechanical clock development, which did not need any kind of fluid power, like water or mercury, to work.
These mechanical clocks were intended for two main purposes: for signalling and notification (e.g., the timing of services and public events) and for modeling the Solar System. The former purpose is administrative; the latter arises naturally given the scholarly interests in astronomy, science, and astrology and how these subjects integrated with the religious philosophy of the time. The astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system.
Simple clocks intended mainly for notification were installed in towers and did not always require faces or hands. They would have announced the canonical hours or intervals between set times of prayer. Canonical hours varied in length as the times of sunrise and sunset shifted. The more sophisticated astronomical clocks would have had moving dials or hands and would have shown the time in various time systems, including Italian hours, canonical hours, and time as measured by astronomers at the time. Both styles of clocks started acquiring extravagant features, such as automata.
In 1283, a large clock was installed at Dunstable Priory in Bedfordshire in southern England; its location above the rood screen suggests that it was not a water clock. In 1292, Canterbury Cathedral installed a 'great horloge'. Over the next 30 years, there were mentions of clocks at a number of ecclesiastical institutions in England, Italy, and France. In 1322, a new clock was installed in Norwich, an expensive replacement for an earlier clock installed in 1273. This had a large (2 metre) astronomical dial with automata and bells. The costs of the installation included the full-time employment of two clockkeepers for two years.
Astronomical.
An elaborate water clock, the 'Cosmic Engine', was invented by Su Song, a Chinese polymath, designed and constructed in China in 1092. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet) and was indirectly powered by a rotating wheel with falling water and liquid mercury, which turned an armillary sphere capable of calculating complex astronomical problems.
In Europe, there were the clocks constructed by Richard of Wallingford in St Albans by 1336, and by Giovanni de Dondi in Padua from 1348 to 1364. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made. They illustrate how quickly the theory of the mechanical clock had been translated into practical constructions, and also that one of the many impulses to their development had been the desire of astronomers to investigate celestial phenomena.
The Astrarium of Giovanni Dondi dell'Orologio was a complex astronomical clock built between 1348 and 1364 in Padua, Italy, by the doctor and clock-maker Giovanni Dondi dell'Orologio. The Astrarium had seven faces and 107 moving gears; it showed the positions of the Sun, the Moon and the five planets then known, as well as religious feast days. The astrarium stood about 1 metre high, and consisted of a seven-sided brass or iron framework resting on 7 decorative paw-shaped feet. The lower section provided a 24-hour dial and a large calendar drum, showing the fixed feasts of the church, the movable feasts, and the position in the zodiac of the Moon's ascending node. The upper section contained 7 dials, each about 30 cm in diameter, showing the positional data for the Primum Mobile, Venus, Mercury, the Moon, Saturn, Jupiter, and Mars. Directly above the 24-hour dial is the dial of the Primum Mobile, so called because it reproduces the diurnal motion of the stars and the annual motion of the Sun against the background of stars. Each of the 'planetary' dials used complex clockwork to produce reasonably accurate models of the planets' motion. These agreed reasonably well both with Ptolemaic theory and with observations.
Wallingford's clock had a large astrolabe-type dial, showing the Sun, the Moon's age, phase, and node, a star map, and possibly the planets. In addition, it had a wheel of fortune and an indicator of the state of the tide at London Bridge. Bells rang every hour, the number of strokes indicating the time. Dondi's clock was a seven-sided construction, 1 metre high, with dials showing the time of day, including minutes, the motions of all the known planets, an automatic calendar of fixed and movable feasts, and an eclipse prediction hand rotating once every 18 years. It is not known how accurate or reliable these clocks would have been. They were probably adjusted manually every day to compensate for errors caused by wear and imprecise manufacture. Water clocks are sometimes still used, and can be examined in places such as ancient castles and museums. The Salisbury Cathedral clock, built in 1386, is considered to be the world's oldest surviving mechanical clock that strikes the hours.
Spring-driven.
Clockmakers developed their art in various ways. Building smaller clocks was a technical challenge, as was improving accuracy and reliability. Clocks could be impressive showpieces to demonstrate skilled craftsmanship, or less expensive, mass-produced items for domestic use. The escapement in particular was an important factor affecting the clock's accuracy, so many different mechanisms were tried.
Spring-driven clocks appeared during the 15th century, although they are often erroneously credited to Nuremberg watchmaker Peter Henlein (or Henle, or Hele) around 1511. The earliest existing spring-driven clock is the chamber clock given to Philip the Good, Duke of Burgundy, around 1430, now in the Germanisches Nationalmuseum. Spring power presented clockmakers with a new problem: how to keep the clock movement running at a constant rate as the spring ran down. This resulted in the invention of the "stackfreed" and the fusee in the 15th century, and many other innovations, down to the invention of the modern "going barrel" in 1760.
Early clock dials did not indicate minutes and seconds. A clock with a dial indicating minutes was illustrated in a 1475 manuscript by Paulus Almanus, and some 15th-century clocks in Germany indicated minutes and seconds.
An early record of a seconds hand on a clock dates back to about 1560 on a clock now in the Fremersdorf collection.
During the 15th and 16th centuries, clockmaking flourished, particularly in the metalworking towns of Nuremberg and Augsburg, and in Blois, France. Some of the more basic table clocks have only one time-keeping hand, with the dial between the hour markers being divided into four equal parts making the clocks readable to the nearest 15 minutes. Other clocks were exhibitions of craftsmanship and skill, incorporating astronomical indicators and musical movements. The cross-beat escapement was invented in 1584 by Jost Bürgi, who also developed the remontoire. Bürgi's clocks were a great improvement in accuracy as they were correct to within a minute a day. These clocks helped the 16th-century astronomer Tycho Brahe to observe astronomical events with much greater precision than before.
Pendulum.
The next development in accuracy occurred after 1656 with the invention of the pendulum clock. Galileo had the idea to use a swinging bob to regulate the motion of a time-telling device earlier in the 17th century. Christiaan Huygens, however, is usually credited as the inventor. He determined the mathematical formula that related pendulum length to time (about 99.4 cm or 39.1 inches for the one-second movement) and had the first pendulum-driven clock made. The first model clock was built in 1657 in The Hague, but it was in England that the idea was taken up. The longcase clock (also known as the "grandfather clock") was created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671. It was also at this time that clock cases began to be made of wood and clock faces to use enamel as well as hand-painted ceramics.
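Huygens's figure can be checked against the standard small-angle pendulum relation (textbook physics, stated here for illustration rather than as his original derivation):

```latex
T = 2\pi\sqrt{\frac{L}{g}}
\qquad\Longrightarrow\qquad
L = g\left(\frac{T}{2\pi}\right)^{2}
```

A movement beating once per second has a full period of T = 2 s, giving L = 9.81 × (2/2π)² ≈ 0.994 m, the 99.4 cm quoted above.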
In 1670, William Clement created the anchor escapement, an improvement over Huygens' crown escapement. Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clockmaker, and others, and the second hand was first introduced.
Hairspring.
In 1675, Huygens and Robert Hooke invented the spiral balance spring, or hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible. The great English clockmaker Thomas Tompion was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern-day configuration. The rack and snail striking mechanism for striking clocks was introduced during the 17th century and had distinct advantages over the 'countwheel' (or 'locking plate') mechanism. During the 20th century there was a common misconception that Edward Barlow invented rack and snail striking; in fact, his invention was connected with a repeating mechanism employing the rack and snail. The repeating clock, which chimes the number of hours (or even minutes) on demand, was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720.
Marine chronometer.
A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. This clock could not contain a pendulum, which would be virtually useless on a rocking ship. In 1714, the British government offered large financial rewards, up to 20,000 pounds, for anyone who could determine longitude accurately. John Harrison, who dedicated his life to improving the accuracy of his clocks, later received considerable sums under the Longitude Act.
In 1735, Harrison built his first chronometer, which he steadily improved upon over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship's pitch and roll at sea, and the use of two different metals to reduce the problem of expansion from heat. The chronometer was tested in 1761 by Harrison's son, and by the end of 10 weeks the clock was in error by less than 5 seconds.
Mass production.
The British had dominated watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite. Although there was an attempt to modernise clock manufacture with mass-production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. In 1816, Eli Terry and some other Connecticut clockmakers developed a way of mass-producing clocks by using interchangeable parts. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that also used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company.
Early electric.
In 1815, the English scientist Francis Ronalds published the first electric clock, powered by dry pile batteries. Alexander Bain, a Scottish clockmaker, patented the electric clock in 1840; in 1841, he also patented the electromagnetic pendulum. The electric clock's mainspring is wound either with an electric motor or with an electromagnet and armature. By the end of the nineteenth century, the advent of the dry cell battery made it feasible to use electric power in clocks. Spring or weight-driven clocks that use electricity, either alternating current (AC) or direct current (DC), to rewind the spring or raise the weight of a mechanical clock would be classified as electromechanical clocks. This classification also applies to clocks that employ an electrical impulse to propel the pendulum. In electromechanical clocks, electricity serves no time-keeping function. These types of clocks were made as individual timepieces but are more commonly used in synchronized time installations in schools, businesses, factories, railroads, and government facilities, with a master clock driving slave clocks.
Where an AC electrical supply of stable frequency is available, timekeeping can be maintained very reliably by using a synchronous motor, essentially counting the cycles. The supply current alternates with an accurate frequency of 50 hertz in many countries, and 60 hertz in others. While the frequency may vary slightly during the day as the load changes, generators are designed to maintain an accurate number of cycles over a day, so the clock may be a fraction of a second slow or fast at any time, but will be perfectly accurate over a long time. The rotor of the motor rotates at a speed that is related to the alternation frequency. Appropriate gearing converts this rotation speed to the correct ones for the hands of the analog clock. Time in these cases is measured in several ways, such as by counting the cycles of the AC supply, vibration of a tuning fork, the behaviour of quartz crystals, or the quantum vibrations of atoms. Electronic circuits divide these high-frequency oscillations into slower ones that drive the time display.
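A minimal sketch of the cycle-counting arithmetic (illustrative only, assuming a nominal 60 Hz supply) shows why such clocks keep good long-term time even though the instantaneous frequency wanders:

```python
NOMINAL_HZ = 60  # the supply frequency the motor gearing is designed for

def elapsed_seconds(cycles_counted: int) -> float:
    """A synchronous clock in effect divides the number of mains
    cycles by the nominal frequency to obtain elapsed time."""
    return cycles_counted / NOMINAL_HZ

# Over a day the utility targets exactly 60 * 86,400 cycles, even if
# the frequency sagged to 59.97 Hz under heavy load at some point:
cycles_per_day = NOMINAL_HZ * 86_400
print(elapsed_seconds(cycles_per_day))  # 86400.0 -> exactly 24 hours
```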
Quartz.
The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first crystal oscillator was invented in 1917 by Alexander M. Nicholson, after which the first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1927 the first quartz clock was built by Warren Marrison and J. W. Horton at Bell Telephone Laboratories. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes at the time, limited their practical use elsewhere. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. Their inherent accuracy and low cost of production resulted in the subsequent proliferation of quartz clocks and watches.
Atomic.
Currently, atomic clocks are the most accurate clocks in existence. They are considerably more accurate than quartz clocks, as the best of them drift by only a few seconds over many billions of years. Atomic clocks were first theorized by Lord Kelvin in 1879. In the 1930s, the development of magnetic resonance created a practical method for doing this. A prototype ammonia maser device was built in 1949 at the U.S. National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale "ephemeris time" (ET). As of 2013, the most stable atomic clocks are ytterbium clocks, which are stable to within less than two parts in a quintillion (2 × 10⁻¹⁸).
Operation.
The invention of the mechanical clock in the 13th century initiated a change in timekeeping methods from continuous processes, such as the motion of the gnomon's shadow on a sundial or the flow of liquid in a water clock, to periodic oscillatory processes, such as the swing of a pendulum or the vibration of a quartz crystal, which had the potential for more accuracy. All modern clocks use oscillation.
Although the mechanisms they use vary, all oscillating clocks, mechanical, electric, and atomic, work similarly and can be divided into analogous parts. They consist of an object that repeats the same motion over and over again, an "oscillator", with a precisely constant time interval between each repetition, or 'beat'. Attached to the oscillator is a "controller" device, which sustains the oscillator's motion by replacing the energy it loses to friction, and converts its oscillations into a series of pulses. The pulses are then counted by some type of "counter", and the number of counts is converted into convenient units, usually seconds, minutes, hours, etc. Finally some kind of "indicator" displays the result in human readable form.
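The four-part division can be made concrete with a toy model (illustrative names only, using a quartz-like beat of 32,768 pulses per second):

```python
class ToyClock:
    """Minimal model of the parts common to oscillating clocks."""

    def __init__(self, beats_per_second=32_768):  # the oscillator's fixed rate
        self.beats_per_second = beats_per_second
        self.pulses = 0                            # the counter's state

    def controller_tick(self, beats: int) -> None:
        """Controller: converts sustained oscillations into counted pulses."""
        self.pulses += beats

    def indicator(self) -> str:
        """Indicator: render the count as human-readable h:m:s."""
        total_seconds = self.pulses // self.beats_per_second
        h, rem = divmod(total_seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}"

clock = ToyClock()
clock.controller_tick(32_768 * 3_725)  # feed in 3,725 seconds' worth of beats
print(clock.indicator())               # 01:02:05
```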
Oscillator.
The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates repetitively at a precisely constant frequency.
The advantage of a harmonic oscillator over other forms of oscillator is that it employs resonance to vibrate at a precise natural resonant frequency or "beat" dependent only on its physical characteristics, and resists vibrating at other rates. The possible precision achievable by a harmonic oscillator is measured by a parameter called its Q, or quality factor, which increases (other things being equal) with its resonant frequency. This is why there has been a long-term trend toward higher frequency oscillators in clocks. Balance wheels and pendulums always include a means of adjusting the rate of the timepiece. Quartz timepieces sometimes include a rate screw that adjusts a capacitor for that purpose. Atomic clocks are primary standards, and their rate cannot be adjusted.
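Q has the standard definition from general physics:

```latex
Q \;=\; 2\pi \cdot \frac{\text{energy stored in the resonator}}{\text{energy dissipated per cycle}} \;\approx\; \frac{f_0}{\Delta f}
```

where f₀ is the resonant frequency and Δf the bandwidth; a high-Q resonator loses little energy per beat, so it needs only tiny impulses from the controller and its rate is correspondingly less disturbed.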
Synchronized or slave clocks.
Some clocks rely for their accuracy on an external oscillator; that is, they are automatically synchronized to a more accurate clock, for example radio clocks set by broadcast time signals from national atomic standards, computers synchronized over the Internet, and slave clocks driven by a master clock.
Controller.
This has the dual function of keeping the oscillator running by giving it 'pushes' to replace the energy lost to friction, and converting its vibrations into a series of pulses that serve to measure the time.
In mechanical clocks, the low Q of the balance wheel or pendulum oscillator made them very sensitive to the disturbing effect of the impulses of the escapement, so the escapement had a great effect on the accuracy of the clock, and many escapement designs were tried. The higher Q of resonators in electronic clocks makes them relatively insensitive to the disturbing effects of the drive power, so the driving oscillator circuit is a much less critical component.
Counter chain.
This counts the pulses and adds them up to get traditional time units of seconds, minutes, hours, etc. It usually has a provision for "setting" the clock by manually entering the correct time into the counter.
Indicator.
This displays the count of seconds, minutes, hours, etc. in a human readable form.
Types.
Clocks can be classified by the type of time display, as well as by the method of timekeeping.
Time display methods.
Analog.
Analog clocks usually use a clock face which indicates time using rotating pointers called "hands" on a fixed numbered dial or dials. The standard clock face, known universally, has a short "hour hand" which indicates the hour on a circular dial of 12 hours, making two revolutions per day, and a longer "minute hand" which indicates the minutes in the current hour on the same dial, which is also divided into 60 minutes. It may also have a "second hand" which indicates the seconds in the current minute. The only other widely used clock face today is the 24-hour analog dial, because of the use of 24-hour time in military organizations and timetables. Before the modern clock face was standardized during the Industrial Revolution, many other face designs were used throughout the years, including dials divided into 6, 8, 10, and 24 hours. During the French Revolution the French government tried to introduce a 10-hour clock, as part of its decimal-based metric system of measurement, but it did not achieve widespread use. An Italian 6-hour clock was developed in the 18th century, presumably to save power (a clock or watch striking 24 times uses more power).
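The proportions of the standard 12-hour face reduce to simple arithmetic; a small sketch (illustrative only):

```python
def hand_angles(hours: int, minutes: int, seconds: int = 0):
    """Angles of the three hands, in degrees clockwise from 12 o'clock,
    on a standard 12-hour dial."""
    sec_angle = seconds * 6.0                          # 360 deg / 60 s
    min_angle = minutes * 6.0 + seconds * 0.1          # creeps as seconds pass
    hour_angle = (hours % 12) * 30.0 + minutes * 0.5   # 360 deg / 12 h, creeping
    return hour_angle, min_angle, sec_angle

print(hand_angles(3, 0))   # (90.0, 0.0, 0.0) -> the right angle seen at 3:00
print(hand_angles(9, 30))  # (285.0, 180.0, 0.0)
```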
Another type of analog clock is the sundial, which tracks the sun continuously, registering the time by the shadow position of its gnomon. Because the sun does not adjust to daylight saving time, users must add an hour during that period. Corrections must also be made for the equation of time, and for the difference between the longitudes of the sundial and of the central meridian of the time zone that is being used (i.e. 15 degrees east of the prime meridian for each hour that the time zone is ahead of GMT). Sundials use some or all of the 24-hour analog dial. There also exist clocks which use a digital display despite having an analog mechanism—these are commonly referred to as flip clocks. Alternative systems have been proposed. For example, the "Twelv" clock indicates the current hour using one of twelve colors, and indicates the minute by showing a proportion of a circular disk, similar to a moon phase.
Digital.
Digital clocks display a numeric representation of time. Two numeric display formats are commonly used on digital clocks: the 24-hour notation, with hours running from 00 to 23, and the 12-hour notation, in which the hours are numbered from 1 to 12 and qualified with an AM/PM indicator.
Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays; many other display technologies are used as well (cathode-ray tubes, nixie tubes, etc.). After a reset, battery change, or power failure, digital clocks without a backup battery or capacitor either start counting from 12:00 or stay at 12:00, often with blinking digits indicating that the time needs to be set. Some newer clocks will reset themselves based on radio or Internet time servers that are tuned to national atomic clocks. Since the introduction of digital clocks in the 1960s, there has been a notable decline in the use of analog clocks.
Some clocks, called 'flip clocks', have digital displays that work mechanically. The digits are painted on sheets of material which are mounted like the pages of a book. Once a minute, a page is turned over to reveal the next digit. These displays are usually easier to read in brightly lit conditions than LCDs or LEDs. Also, they do not go back to 12:00 after a power interruption. Flip clocks generally do not have electronic mechanisms. Usually, they are driven by AC-synchronous motors.
Hybrid (analog-digital).
Hybrid clocks have analog quadrants combined with a digital component: usually the hours and minutes are displayed by hands and the seconds in digital mode.
Auditory.
For convenience, distance, telephony, or blindness, auditory clocks present the time as sounds. The sound is either spoken natural language (e.g., "The time is twelve thirty-five") or an auditory code (e.g., the number of sequential bell rings on the hour representing the number of the hour, as with the bell Big Ben). Most telecommunication companies also provide a speaking clock service.
Word.
Word clocks are clocks that display the time visually using sentences, e.g., "It's about three o'clock." These clocks can be implemented in hardware or software.
Projection.
Some clocks, usually digital ones, include an optical projector that shines a magnified image of the time display onto a screen or onto a surface such as an indoor ceiling or wall. The digits are large enough to be easily read, without using glasses, by persons with moderately imperfect vision, so the clocks are convenient for use in their bedrooms. Usually, the timekeeping circuitry has a battery as a backup source for an uninterrupted power supply to keep the clock on time, while the projection light only works when the unit is connected to an A.C. supply. Completely battery-powered portable versions resembling flashlights are also available.
Tactile.
Auditory and projection clocks can be used by people who are blind or have limited vision. There are also clocks for the blind that have displays that can be read by using the sense of touch. Some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. Another type is essentially digital, and uses devices that use a code such as Braille to show the digits so that they can be felt with the fingertips.
Multi-display.
Some clocks have several displays driven by a single mechanism, and some others have several completely separate mechanisms in a single case. Clocks in public places often have several faces visible from different directions, so that the clock can be read from anywhere in the vicinity; all the faces show the same time. Other clocks show the current time in several time-zones. Watches that are intended to be carried by travellers often have two displays, one for the local time and the other for the time at home, which is useful for making pre-arranged phone calls. Some equation clocks have two displays, one showing mean time and the other solar time, as would be shown by a sundial. Some clocks have both analog and digital displays. Clocks with Braille displays usually also have conventional digits so they can be read by sighted people.
Purposes.
Clocks are in homes, offices and many other places; smaller ones (watches) are carried on the wrist or in a pocket; larger ones are in public places, e.g. a railway station or church. A small clock is often shown in a corner of computer displays, mobile phones and many MP3 players.
The primary purpose of a clock is to "display" the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; they are referred to as "alarm clocks". The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called "training clocks".
A clock mechanism may be used to "control" a device according to time, e.g. a central heating system, a VCR, or a time bomb (see: digital counter). Such mechanisms are usually called timers. Clock mechanisms are also used to drive devices such as solar trackers and astronomical telescopes, which have to turn at accurately controlled speeds to counteract the rotation of the Earth.
Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles.
Time standards.
For some scientific work timing of the utmost accuracy is essential. It is also necessary to have a standard of the maximum accuracy against which working clocks can be calibrated. An ideal clock would give the time to unlimited accuracy, but this is not realisable. Many physical processes, in particular including some transitions between atomic energy levels, occur at exceedingly stable frequency; counting cycles of such a process can give a very accurate and consistent time—clocks which work this way are usually called atomic clocks. Such clocks are typically large, very expensive, require a controlled environment, and are far more accurate than required for most purposes; they are typically used in a standards laboratory.
Navigation.
Until advances in the late twentieth century, navigation depended on the ability to measure latitude and longitude. Latitude can be determined through celestial navigation; the measurement of longitude requires accurate knowledge of time. This need was a major motivation for the development of accurate mechanical clocks. John Harrison created the first highly accurate marine chronometer in the mid-18th century. The Noon gun in Cape Town still fires an accurate signal to allow ships to check their chronometers. Many buildings near major ports used to have (some still do) a large ball mounted on a tower or mast arranged to drop at a pre-determined time, for the same purpose. While satellite navigation systems such as GPS require unprecedentedly accurate knowledge of time, this is supplied by equipment on the satellites; vehicles no longer need timekeeping equipment.
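The conversion behind this is that the Earth rotates 15° of longitude per hour, so comparing local solar noon against a chronometer keeping the reference port's time yields longitude directly; a simplified sketch (hypothetical function, ignoring the equation of time):

```python
def longitude_west_degrees(reference_time_at_local_noon_h: float) -> float:
    """Each hour by which the reference (e.g. Greenwich) time is ahead of
    local solar noon puts the observer 15 degrees further west
    (a negative result means degrees east)."""
    return (reference_time_at_local_noon_h - 12.0) * 15.0

# The sun peaks locally while a Greenwich-set chronometer reads 16:00:
print(longitude_west_degrees(16.0))  # 60.0 -> 60 degrees west
print(longitude_west_degrees(10.0))  # -30.0 -> 30 degrees east
```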
Sports and games.
Clocks can be used to measure varying periods of time in games and sports. Stopwatches can be used to time the performance of track athletes. Chess clocks are used to limit the board game players' time to make a move. In various sports, game clocks measure the duration of the game or of its subdivisions, while other clocks may be used for tracking different durations; these include play clocks, shot clocks, and pitch clocks.
Culture.
Folklore and superstition.
In the United Kingdom, clocks are associated with various beliefs, many involving death or bad luck. In legends, clocks have reportedly stopped of their own accord upon a nearby person's death, especially those of monarchs. The clock in the House of Lords supposedly stopped at "nearly" the hour of George III's death in 1820, the one at Balmoral Castle stopped during the hour of Queen Victoria's death, and similar legends are related about clocks associated with William IV and Elizabeth I. Many superstitions exist about clocks. One stopping before a person has died may foretell coming death. Similarly, if a clock strikes during a church hymn or a marriage ceremony, death or calamity is prefigured for the parishioners or a spouse, respectively. Death or ill events are foreshadowed if a clock strikes the wrong time. It may also be unlucky to have a clock face a fire or to speak while a clock is striking.
In Chinese culture, giving a clock ("sòng zhōng") is often taboo, especially to the elderly, as the phrase is a homophone of the act of attending another's funeral ("sòng zhōng").
|
6451
|
92899
|
https://en.wikipedia.org/wiki?curid=6451
|
Charles Proteus Steinmetz
|
Charles Proteus Steinmetz (born Karl August Rudolph Steinmetz; April 9, 1865 – October 26, 1923) was a Prussian-American mathematician and electrical engineer and professor at Union College. He fostered the development of alternating current that made possible the expansion of the electric power industry in the United States, formulating mathematical theories for engineers. He made ground-breaking discoveries in the understanding of hysteresis that enabled engineers to design better electromagnetic apparatus, especially electric motors for use in industry.
At the time of his death, Steinmetz held over 200 patents. A genius in both mathematics and electronics, he did work that earned him the nicknames "Forger of Thunderbolts" and "The Wizard of Schenectady". Steinmetz's equation, Steinmetz solids, Steinmetz curves, and Steinmetz equivalent circuit are all named after him, as are numerous honors and scholarships, including the IEEE Charles Proteus Steinmetz Award, one of the highest technical recognitions given by the Institute of Electrical and Electronics Engineers professional society.
Early life and education.
Steinmetz was born Karl August Rudolph Steinmetz on April 9, 1865, in Breslau, Province of Silesia, Prussia (now Wrocław, Poland), the son of Caroline (Neubert) and Karl Heinrich Steinmetz. He was baptized as a Lutheran into the Evangelical Church of Prussia. Steinmetz, who stood only about four feet (1.2 m) tall as an adult, had dwarfism, hunchback, and hip dysplasia, as did his father and grandfather. Steinmetz graduated with honors from St. John's Gymnasium in 1882.
Following Gymnasium, Steinmetz studied at the University of Breslau to begin work on his undergraduate degree in 1883. Nearing completion of his doctorate in 1888, he was forced to flee to Zurich, Switzerland, as the German government was preparing to prosecute him for his socialist activities.
Political persecution and emigration.
As socialist meetings and the socialist press had been banned in Germany, Steinmetz fled to Zurich to escape possible arrest. Cornell University Professor Ronald R. Kline, author of "Steinmetz: Engineer and Socialist", points to other factors which reinforced Steinmetz's decision to leave his homeland, such as financial problems and the prospect of a more harmonious life with his socialist friends and supporters than the stressful domestic circumstances of his father's household.
Faced with an expiring visa, he emigrated to the United States in 1889 at the age of 24. He changed his first name to "Charles" to sound more American, and chose the middle name "Proteus" after an epithet bestowed upon him by his college fraternity brothers, referring to the wise hunchbacked character in the "Odyssey" who knew many secrets.
Political activism and beliefs.
Steinmetz was politically active in the US as a technocratic socialist for over thirty years. Following the Bolshevik introduction of a technocratic plan to electrify Russia, Steinmetz spoke of Lenin alongside Albert Einstein as the "two greatest minds of our time."
He believed in a corporatist industrial government that would also take on the state's human welfare functions.
A member of the original Technical Alliance, which also included Thorstein Veblen and Leland Olds, Steinmetz had great faith in the ability of machines to eliminate human toil and create abundance for all. He put it this way: "Some day we [will] make the good things of life for everybody."
Steinmetz's techno-utopian optimism was deeply intertwined with his political beliefs, and he was convinced that the spread of electrification would inevitably steer human society toward socialism.
Electrical engineering.
Steinmetz is known for his contribution in three major fields of alternating current (AC) systems theory: hysteresis, steady-state analysis, and transients.
AC hysteresis theory.
Shortly after arriving in the United States, Steinmetz went to work for Rudolf Eickemeyer in Yonkers, New York, and published in the field of magnetic hysteresis, earning worldwide professional recognition. Eickemeyer's firm developed transformers for use in the transmission of electrical power among many other mechanical and electrical devices. In 1893 Eickemeyer's company, along with all of its patents and designs, was bought by the newly formed General Electric Company, where Steinmetz quickly became known as the engineering wizard in GE's engineering community.
AC steady state circuit theory.
Steinmetz's work revolutionized AC circuit theory and analysis, which had previously been carried out using complicated, time-consuming calculus-based methods. In the groundbreaking paper "Complex Quantities and Their Use in Electrical Engineering", presented at a July 1893 meeting of the American Institute of Electrical Engineers (AIEE), Steinmetz simplified these complicated methods to "a simple problem of algebra". He systematized the use of complex number phasor representation in electrical engineering education texts, whereby the lower-case letter "j" is used to designate the 90-degree rotation operator in AC system analysis. His seminal books and many other AIEE papers "taught a whole generation of engineers how to deal with AC phenomena".
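The same phasor algebra is easy to reproduce with modern complex arithmetic (an illustrative example with arbitrary component values; Python's 1j plays the role of Steinmetz's 90-degree operator j):

```python
import cmath

# Series R-L circuit driven at 60 Hz (arbitrary illustrative values)
f, R, L = 60.0, 10.0, 0.05              # Hz, ohms, henries
omega = 2 * cmath.pi * f
Z = R + 1j * omega * L                  # impedance as one complex number

V = 120 + 0j                            # 120 V reference phasor
I = V / Z                               # Ohm's law: "a simple problem of algebra"

mag, phase = cmath.polar(I)
print(f"|I| = {mag:.2f} A at {phase * 180 / cmath.pi:.1f} degrees")
# -> |I| = 5.62 A at -62.1 degrees: the current lags the voltage,
#    as expected for an inductive load
```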
AC transient theory.
Steinmetz also greatly advanced the understanding of lightning. His systematic experiments resulted in the first laboratory-created "man-made lightning", earning him the nickname "Forger of Thunderbolts". These experiments were conducted in a football-field-sized laboratory at General Electric, using 120,000-volt generators. Like alternating-current pioneer Nikola Tesla, he also erected a lightning tower to attract natural lightning in order to study its patterns and effects, which resulted in several theories.
Professional life.
Steinmetz acted in the following professional capacities:
He was granted an honorary degree from Harvard University in 1901 and a doctorate from Union College in 1903. Other awards include the Certificate of Merit of the Franklin Institute, 1908; the Elliott Cresson Medal, 1913; and the Cedergren Medal, 1914. Steinmetz was also an elected member of both the American Academy of Arts and Sciences and the American Philosophical Society.
Steinmetz wrote 13 books and 60 articles, not exclusively about engineering. He was a member and adviser to the fraternity Phi Gamma Delta at Union College, whose chapter house was one of the first electrified residences.
While serving as president of the Schenectady Board of Education, Steinmetz introduced numerous progressive reforms, including extended school hours, school meals, school nurses, special classes for the children of immigrants, and the distribution of free textbooks.
Personal life.
Steinmetz had dwarfism, standing only about four feet (1.2 m) tall as an adult, and was affected by kyphosis, like his father and grandfather. In spite of his love for children and family life, Steinmetz remained unmarried to prevent his spinal deformity from being passed on to any offspring.
When Joseph LeRoy Hayden, a loyal and hardworking lab assistant, announced that he would marry and look for his own living quarters, Steinmetz made the unusual proposal of opening his large home, complete with research lab, greenhouse, and office to the Haydens and their prospective family. Hayden favored the idea, but his future wife was wary of the unorthodox arrangement. She agreed after Steinmetz's assurance that she could run the house as she saw fit.
After an uneasy start, the arrangement worked well for all parties, especially after three Hayden children were born. Steinmetz legally adopted Joseph Hayden as his son, becoming grandfather to the youngsters, entertaining them with fantastic stories and spectacular scientific demonstrations. The unusual, harmonious living arrangement lasted for the rest of Steinmetz's life.
In 1894, Steinmetz founded the "Mohawk Aerial Navigation Company (Ltd.)", which became the first practical, active gliding club in the world, although none of its prototypes ever flew.
Steinmetz was a lifelong agnostic. He died on October 26, 1923, at the age of 58 and was buried in Vale Cemetery in Schenectady.
Legacy.
Steinmetz earned wide recognition among the scientific community and numerous awards and honors both during his life and posthumously.
Steinmetz's equation, derived from his experiments, defines the approximate heat energy due to magnetic hysteresis released, per cycle per unit volume of magnetic material. A Steinmetz solid is the solid body generated by the intersection of two or three cylinders of equal radius at right angles. Steinmetz's equivalent circuit is still widely used for the design and testing of induction machines.
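In its classical form, the equation gives the hysteresis energy lost per cycle per unit volume, with Steinmetz's empirical exponent of 1.6 (modern practice fits the exponent per material):

```latex
W_h = \eta\, B_{\max}^{1.6}
\qquad\qquad
P_h = \eta\, f\, B_{\max}^{1.6}
```

where η is the material's Steinmetz hysteresis coefficient, B_max the peak magnetic flux density, and f the frequency of the alternating field.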
One of the highest technical recognitions given by the Institute of Electrical and Electronics Engineers, the "IEEE Charles Proteus Steinmetz Award", is given for major contributions to standardization within the field of electrical and electronics engineering.
The Charles P. Steinmetz Memorial Lecture series was begun in his honor in 1925, sponsored by the Schenectady branch of the IEEE. Through 2017, seventy-three lectures had taken place, held almost exclusively at Union College, featuring notable figures such as Nobel laureate experimental physicist Robert A. Millikan, helicopter inventor Igor Sikorsky, nuclear submarine pioneer Admiral Hyman G. Rickover (1963), Nobel-winning semiconductor inventor William Shockley, and Internet "founding father" Leonard Kleinrock.
Steinmetz's connection to Union is further celebrated with the annual Steinmetz Symposium, a day-long event in which Union undergraduates give presentations on research they have done. Steinmetz Hall, which houses the Union College computer center, is named after him.
The Charles P. Steinmetz Scholarship is awarded annually by the college, underwritten since its inception in 1923 by the General Electric Company. An additional Charles P. Steinmetz Memorial Scholarship was later established at Union by Marjorie Hayden, daughter of Joseph and Corrine Hayden, and is awarded to students majoring in engineering or physics.
A 1914 "Duplex Drive Brougham" Detroit Electric automobile that once belonged to Steinmetz was purchased by Union College in 1971, and restored for use in campus ceremonies. The Steinmetz car is permanently displayed in the first-floor corridor between the Wold Center and F.W. Olin building.
A Chicago public high school, Steinmetz College Prep, is named for him, as well as a Schenectady public school, the Steinmetz Career and Leadership Academy, formerly Steinmetz Middle-School.
A public park in north Schenectady, New York, was named for him in 1931.
In 1983, the US Post Office included Steinmetz in a series of postage stamps commemorating American inventors.
In May 2015, a life-size bronze statue of Charles Steinmetz meeting Thomas Edison, by sculptor and caster Dexter Benedict, was unveiled on a plaza at the corner of Erie Boulevard and South Ferry Street in Schenectady.
Charles Steinmetz's Mohawk River cabin is preserved and on display in the outdoor collection of historic structures in Greenfield Village, part of the Henry Ford Museum complex in Dearborn, Michigan.
In popular culture.
Steinmetz is featured in John Dos Passos's "U.S.A." trilogy in one of the biographies. He also serves as a major character in Starling Lawrence's "The Lightning Keeper".
Steinmetz is a major character in the novel "Electric City" by Elizabeth Rosner.
In the 1944 Three Stooges short "Busy Buddies", Moe Howard references Steinmetz.
Steinmetz was portrayed in 1959 by the actor Rod Steiger in the CBS television anthology series, "The Joseph Cotten Show". The episode focused on his socialist activities in Germany.
A famous anecdote about Steinmetz concerns a troubleshooting consultation at Henry Ford's River Rouge Plant. A humorous aspect of the story is the "itemized bill" he submitted for the work performed.
Bibliography.
Patents.
At the time of his death, Steinmetz held over 200 patents. Among them are:
|
6452
|
7903804
|
https://en.wikipedia.org/wiki?curid=6452
|
Charles Martel
|
Charles Martel (; – 22 October 741), "Martel" being a sobriquet in Old French for "The Hammer", was a Frankish political and military leader who, as Duke and Prince of the Franks and Mayor of the Palace, was the de facto ruler of the Franks from 718 until his death. He was a son of the Frankish statesman Pepin of Herstal and a noblewoman named Alpaida. Charles successfully asserted his claims to power as successor to his father as the power behind the throne in Frankish politics. Continuing and building on his father's work, he restored centralized government in Francia and began the series of military campaigns that re-established the Franks as the undisputed masters of all Gaul. According to a near-contemporary source, the "Liber Historiae Francorum", Charles was "a warrior who was uncommonly ... effective in battle".
Charles gained a victory against an Umayyad invasion of Aquitaine at the Battle of Tours, at a time when the Umayyad Caliphate controlled most of the Iberian Peninsula. Alongside his military endeavours, Charles has been traditionally credited with an influential role in the development of the Frankish system of feudalism.
At the end of his reign, Charles divided Francia between his sons, Carloman and Pepin. The latter became the first king of the Carolingian dynasty. Pepin's son Charlemagne, grandson of Charles, extended the Frankish realms and became the first emperor in the West since the Fall of the Western Roman Empire.
Background.
Charles, nicknamed "Martel" ("the Hammer") in later chronicles, was a son of Pepin of Herstal and his mistress, and possibly second wife, Alpaida. He had a brother named Childebrand, who later became the Frankish "dux" (that is, "duke") of Burgundy. Charles was a great-grandson of Arnulf of Metz.
Older historiography commonly describes Charles as "illegitimate", but the dividing line between wives and concubines was not clear-cut in eighth-century Francia. It is likely that the accusation of "illegitimacy" derives from the desire of Pepin's first wife Plectrude to see her progeny as heirs to Pepin's throne.
By Charles's lifetime the Merovingians had ceded power to the Mayors of the Palace, who controlled the royal treasury, dispensed patronage, and granted land and privileges in the name of the figurehead king. Charles's father, Pepin of Herstal, had united the Frankish realm by conquering Neustria and Burgundy. Pepin was the first to call himself Duke and Prince of the Franks, a title later taken up by Charles.
Contesting for power.
In December 714, Pepin of Herstal died. A few months before his death and shortly after the murder of his son Grimoald the Younger, he had taken the advice of his wife Plectrude to designate as his sole heir Theudoald, his grandson by their deceased son Grimoald. This was immediately opposed by the Austrasian nobles because Theudoald was a child of only eight years of age. To prevent Charles using this unrest to his own advantage, Plectrude had him imprisoned in Cologne, the city which was intended to be her capital. This prevented an uprising on his behalf in Austrasia, but not in Neustria.
Civil war of 715–718.
Pepin's death occasioned open conflict between his heirs and the Neustrian nobles who sought political independence from Austrasian control. In 715, Dagobert III named Raganfrid mayor of the palace. On 26 September 715, Raganfrid's Neustrians met the young Theudoald's forces at the Battle of Compiègne. Theudoald was defeated and fled back to Cologne. Before the end of the year, Charles had escaped from prison and been acclaimed mayor by the nobles of Austrasia. That same year, Dagobert III died and the Neustrians proclaimed Chilperic II, the cloistered son of Childeric II, as king.
Battle of Cologne.
In 716, Chilperic and Raganfrid together led an army into Austrasia intent on seizing the Pippinid wealth at Cologne. The Neustrians allied with another invading force under Radbod, King of the Frisians and met Charles in battle near Cologne, which was still held by Plectrude. Charles had little time to gather men or prepare and the result was inevitable. The Frisians held off Charles, while the king and his mayor besieged Plectrude at Cologne, where she bought them off with a substantial portion of Pepin's treasure. After that they withdrew. The Battle of Cologne is the only defeat of Charles's career.
Battle of Amblève.
Charles retreated to the hills of the Eifel to gather and train men. In April 716, he fell upon the triumphant army near Malmedy as it was returning to Neustria. In the ensuing Battle of Amblève, Charles attacked as the enemy rested at midday. According to one source, he split his forces into several groups which fell upon them from many sides. Another suggests that while this was his intention, he then decided, given the enemy's unpreparedness, that it was not necessary. In any event, the suddenness of the assault led them to believe they were facing a much larger host. Many of the enemy fled and Charles's troops gathered the spoils of the camp. His reputation increased considerably as a result, and he attracted more followers. This battle is often considered by historians as the turning point in Charles's struggle.
Battle of Vincy.
Richard Gerberding points out that up to this time, much of Charles's support was probably from his mother's kindred in the lands around Liège. After Amblève, he seems to have won the backing of the influential Willibrord, founder of the Abbey of Echternach. The abbey had been built on land donated by Plectrude's mother, Irmina of Oeren, but most of Willibrord's missionary work had been carried out in Frisia. In joining Chilperic and Raganfrid, Radbod of Frisia sacked Utrecht, burning churches and killing many missionaries. Willibrord and his monks were forced to flee to Echternach. Gerberding suggests that Willibrord had decided that the chances of preserving his life's work were better with a successful field commander like Charles than with Plectrude in Cologne. Willibrord subsequently baptized Charles's son Pepin. Gerberding suggests a likely date of Easter 716. Charles also received support from bishop Pepo of Verdun.
Charles took time to rally more men and prepare. By the following spring, he had attracted enough support to invade Neustria. Charles sent an envoy who proposed a cessation of hostilities if Chilperic would recognize his rights as mayor of the palace in Austrasia. The refusal was not unexpected but served to impress upon Charles's forces the unreasonableness of the Neustrians. They met near Cambrai at the Battle of Vincy on 21 March 717. The victorious Charles pursued the fleeing king and mayor to Paris, but as he was not yet prepared to hold the city, he turned back to deal with Plectrude and Cologne. He took the city and dispersed her adherents. Plectrude was allowed to retire to a convent. Theudoald lived to 741 under his uncle's protection.
Consolidation of power.
Upon this success, Charles proclaimed Chlothar IV king in Austrasia in opposition to Chilperic and deposed Rigobert, archbishop of Reims, replacing him with Milo, a lifelong supporter.
In 718, Chilperic responded to Charles's new ascendancy by making an alliance with Odo the Great (or Eudes, as he is sometimes known), the duke of Aquitaine, who had become independent during the civil war in 715, but was again defeated, at the Battle of Soissons, by Charles. Chilperic fled with his ducal ally to the land south of the Loire and Raganfrid fled to Angers. Soon Chlotar IV died and Odo surrendered King Chilperic in exchange for Charles recognizing his dukedom. Charles recognized Chilperic as king of the Franks in return for legitimate royal affirmation of his own mayoralty over all the kingdoms.
Wars of 718–732.
Between 718 and 732, Charles secured his power through a series of victories. Having unified the Franks under his banner, Charles was determined to punish the Saxons who had invaded Austrasia. Therefore, late in 718, he laid waste their country to the banks of the Weser, the Lippe, and the Ruhr. He defeated them in the Teutoburg Forest and thus secured the Frankish border.
When the Frisian leader Radbod died in 719, Charles seized West Frisia without any great resistance on the part of the Frisians, who had been subjected to the Franks but had rebelled upon the death of Pepin. When Chilperic II died in 721, Charles appointed as his successor the son of Dagobert III, Theuderic IV, who was still a minor and who occupied the throne from 721 to 737. Charles was now appointing the kings whom he supposedly served ("rois fainéants"); by the end of his reign he appointed none at all. At this time, Charles again marched against the Saxons. Then the Neustrians rebelled under Raganfrid, who had left the county of Anjou. They were easily defeated in 724, but Raganfrid gave up his sons as hostages in return for keeping his county. This ended the civil wars of Charles's reign.
The next six years were devoted in their entirety to assuring Frankish authority over the neighboring political groups. Between 720 and 723, Charles was fighting in Bavaria, where the Agilolfing dukes had gradually evolved into independent rulers, recently in alliance with Liutprand the Lombard. He forced the Alemanni to accompany him, and Duke Hugbert submitted to Frankish suzerainty. In 725 he brought back the Agilolfing Princess Swanachild as a second wife.
In 725 and 728, he again entered Bavaria but, in 730, he marched against Lantfrid, Duke of Alemannia, who had also become independent, and killed him in battle. He forced the Alemanni to capitulate to Frankish suzerainty and did not appoint a successor to Lantfrid. Thus, southern Germany once more became part of the Frankish kingdom, as had northern Germany during the first years of the reign.
Aquitaine and the Battle of Tours in 732.
In 731, after defeating the Saxons, Charles turned his attention to the rival southern realm of Aquitaine, and crossed the Loire, breaking the treaty with Duke Odo. The Franks ransacked Aquitaine twice, and captured Bourges, although Odo retook it. The "Continuations of Fredegar" allege that Odo called on assistance from the recently established emirate of al-Andalus, but there had been Arab raids into Aquitaine from the 720s onwards. Indeed, the anonymous Chronicle of 754 records a victory for Odo in 721 at the Battle of Toulouse, while the "Liber Pontificalis" records that Odo had killed 375,000 Saracens. It is more likely that this invasion or raid took place in revenge for Odo's support for a rebel Berber leader named Munnuza.
Whatever the precise circumstances were, it is clear that an army under the leadership of Abd al-Rahman al-Ghafiqi headed north, and after some minor engagements marched on the wealthy city of Tours. According to British medieval historian Paul Fouracre, "Their campaign should perhaps be interpreted as a long-distance raid rather than the beginning of a war". They were, however, defeated by the army of Charles at the Battle of Tours (known in France as the Battle of Poitiers), at a location between the French cities of Tours and Poitiers, in a victory described by the "Continuations of Fredegar". According to the historian Bernard Bachrach, the Arab army, mostly mounted, failed to break through the Frankish infantry. News of this battle spread, and may be recorded in Bede's "Ecclesiastical History" (Book V, ch. 23). However, it is not given prominence in Arabic sources from the period.
Despite his victory, Charles did not gain full control of Aquitaine, and Odo remained duke until 735.
Wars of 732–737.
Between his victory of 732 and 735, Charles reorganized the kingdom of Burgundy, replacing the counts and dukes with his loyal supporters, thus strengthening his hold on power. He was forced, by the ventures of Bubo, Duke of the Frisians, to invade independent-minded Frisia again in 734. In that year, he slew the duke at the Battle of the Boarn. Charles ordered the Frisian pagan shrines destroyed, and so wholly subjugated the populace that the region was peaceful for twenty years after.
In 735, Duke Odo of Aquitaine died. Though Charles wished to rule the duchy directly and went there to elicit the submission of the Aquitanians, the aristocracy proclaimed Odo's son, Hunald I of Aquitaine, as duke, and Charles and Hunald eventually recognised each other's position.
Interregnum (737–741).
In 737, at the tail end of his campaigning in Provence and Septimania, the Merovingian king, Theuderic IV, died. Charles, titling himself "maior domus" and "princeps et dux Francorum", did not appoint a new king and nobody acclaimed one. The throne lay vacant until Charles's death. The interregnum, the final four years of Charles's life, was relatively peaceful although in 738 he compelled the Saxons of Westphalia to submit and pay tribute and in 739 he checked an uprising in Provence where some rebels united under the leadership of Maurontus.
Charles used the relative peace to set about integrating the outlying realms of his empire into the Frankish church. He erected four dioceses in Bavaria (Salzburg, Regensburg, Freising, and Passau) and gave them Boniface as archbishop and metropolitan over all Germany east of the Rhine, with his seat at Mainz. Boniface had been under his protection from 723 on. Indeed, the saint himself explained to his old friend, Daniel of Winchester, that without it he could neither administer his church, defend his clergy nor prevent idolatry.
In 739, Pope Gregory III begged Charles for his aid against Liutprand, but Charles was loath to fight his onetime ally and ignored the plea. Nonetheless, the pope's request for Frankish protection showed how far Charles had come from the days when he was tottering on excommunication, and set the stage for his son and grandson to assert themselves in the peninsula.
Death and transition in rule.
Charles died on 22 October 741, at Quierzy-sur-Oise in what is today the Aisne "département" in the Picardy region of France. He was buried at Saint Denis Basilica in Paris.
His territories had been divided among his adult sons a year earlier: to Carloman he gave Austrasia, Alemannia, and Thuringia, and to Pippin the Younger Neustria, Burgundy, Provence, and Metz and Trier in the "Mosel duchy". Grifo was given several lands throughout the kingdom, but at a later date, just before Charles died.
Legacy.
Earlier in his life Charles had many internal opponents and felt the need to appoint his own kingly claimant, Chlotar IV. Later, however, the dynamics of rulership in Francia had changed, and no hallowed Merovingian ruler was required. Charles divided his realm among his sons without opposition (though he ignored his young son Bernard). For many historians, Charles laid the foundations for his son Pepin's rise to the Frankish throne in 751, and his grandson Charlemagne's imperial acclamation in 800. However, for Paul Fouracre, while Charles was "the most effective military leader in Francia", his career "finished on a note of unfinished business".
Family and children.
Charles married twice, his first wife being Rotrude of Treves, daughter either of Lambert II, Count of Hesbaye, or of Leudwinus, Count of Treves. They had the following children:
Most of the children married and had issue. Hiltrud married Odilo I (Duke of Bavaria). Landrade was once believed to have married a Sigrand (Count of Hesbania) but Sigrand's wife was more likely the sister of Rotrude. Auda married Theoderic, Count of Autun.
Charles also married a second time, to Swanhild and they had a child named Grifo.
With mistress Ruodhaid he had:
With an unnamed mistress he had:
Reputation and historiography.
Military victories.
For early medieval authors, Charles was famous for his military victories. Paul the Deacon, for instance, attributed to Charles a victory against the Saracens that was actually won by Odo of Aquitaine. However, alongside this there soon developed a darker reputation for his alleged abuse of church property. A ninth-century text, the "Visio Eucherii", possibly written by Hincmar of Reims, portrayed Charles as suffering in hell for this reason. According to British medieval historian Paul Fouracre, this was "the single most important text in the construction of Charles's reputation as a seculariser or despoiler of church lands".
By the eighteenth century, historians such as Edward Gibbon had begun to portray the Frankish leader as the saviour of Christian Europe from a full-scale Islamic invasion.
In the nineteenth century, the German historian Heinrich Brunner argued that Charles had confiscated church lands in order to fund military reforms that allowed him to defeat the Arab conquests, in this way brilliantly combining two traditions about the ruler. However, Fouracre argued that "...there is not enough evidence to show that there was a decisive change either in the way in which the Franks fought, or in the way in which they organised the resources needed to support their warriors."
Many twentieth-century European historians continued to develop Gibbon's perspectives, such as French medievalist Christian Pfister, who wrote in 1911 that
Similarly, William E. Watson, who wrote of the battle's importance in Frankish and world history in 1993, suggested that
And in 1993, the influential political scientist Samuel Huntington saw the battle of Tours as marking the end of the "Arab and Moorish surge west and north".
Other recent historians, however, argue that the importance of the battle is dramatically overstated, both for European history in general and for Charles's reign in particular. This view is typified by Alessandro Barbero, who in 2004 wrote,
Similarly, in 2002 Tomaž Mastnak wrote:
More recently, the memory of Charles has been appropriated by far right and white nationalist groups, such as the 'Charles Martel Group' in France, and by the perpetrator of the Christchurch mosque shootings at Al Noor Mosque and Linwood Islamic Centre in Christchurch, New Zealand, in 2019. The memory of Charles is a topic of debate in contemporary French politics on both the right and the left.
Order of the Genet.
In the seventeenth century, a legend emerged that Charles had formed the first regular order of knights in France. In 1620, Andre Favyn stated (without providing a source) that among the spoils Charles's forces captured after the Battle of Tours were many genets (raised for their fur) and several of their pelts. Charles gave these furs to leaders amongst his army, forming the first order of knighthood, the Order of the Genet. Favyn's claim was then repeated and elaborated in later works in English, for instance by Elias Ashmole in 1672, and James Coats in 1725.
External links.
|
6456
|
33450425
|
https://en.wikipedia.org/wiki?curid=6456
|
Charles Edward Jones
|
Charles Edward "Chuck" Jones (November 8, 1952 – September 11, 2001) was a United States Air Force officer, aeronautical engineer, computer programmer, and astronaut in the USAF Manned Spaceflight Engineer Program. He was killed during the September 11 attacks, aboard American Airlines Flight 11.
Life.
Charles Edward Jones was born November 8, 1952, in Clinton, Indiana. He graduated from Wichita East High School in 1970, earned a Bachelor of Science degree in Astronautical Engineering from the United States Air Force Academy in 1974, and received a Master of Science degree in astronautics from Massachusetts Institute of Technology in 1980. He entered the USAF Manned Spaceflight Engineer Program in 1982, and was scheduled to fly on mission STS-71-B in December 1986, but the mission was canceled after the "Challenger" disaster in January 1986. He left the Manned Spaceflight Engineer program in 1987.
He later worked for the Defense Intelligence Agency at Bolling Air Force Base in Washington, D.C., and was Systems Program Director for Intelligence and Information Systems at Hanscom Air Force Base, Massachusetts. Jones later became the manager of space programs for BAE Systems.
Jones was killed at the age of 48 in the attacks of September 11, 2001, aboard American Airlines Flight 11. Jones was flying that day on a routine business trip for BAE Systems, and had been living as a retired U.S. Air Force colonel in Bedford, Massachusetts, at the time of his death. He was survived by his wife Jeanette.
At the National 9/11 Memorial, Jones is memorialized at the North Pool, on Panel N-74.
Military decorations.
His awards include:
|
6458
|
28481209
|
https://en.wikipedia.org/wiki?curid=6458
|
Ceramic
|
A ceramic is any of the various hard, brittle, heat-resistant, and corrosion-resistant materials made by shaping and then firing an inorganic, nonmetallic material, such as clay, at a high temperature. Common examples are earthenware, porcelain, and brick.
The earliest ceramics made by humans were fired clay bricks used for building house walls and other structures. Other pottery objects such as pots, vessels, vases and figurines were made from clay, either by itself or mixed with other materials like silica, hardened by sintering in fire. Later, ceramics were glazed and fired to create smooth, colored surfaces, decreasing porosity through the use of glassy, amorphous ceramic coatings on top of the crystalline ceramic substrates. Ceramics now include domestic, industrial, and building products, as well as a wide range of materials developed for use in advanced ceramic engineering, such as semiconductors.
The word "ceramic" comes from the Ancient Greek word (), meaning "of or for pottery" (). The earliest known mention of the root "ceram-" is the Mycenaean Greek , workers of ceramic, written in Linear B syllabic script. The word "ceramic" can be used as an adjective to describe a material, product, or process, or it may be used as a noun, either singular or, more commonly, as the plural noun "ceramics".
Materials.
Ceramic material is an inorganic, metallic oxide, nitride, or carbide material. Some elements, such as carbon or silicon, may be considered ceramics. Ceramic materials are brittle, hard, strong in compression, and weak in shearing and tension. They withstand the chemical erosion that occurs in other materials subjected to acidic or caustic environments. Ceramics generally can withstand very high temperatures, ranging from 1,000 °C to 1,600 °C (1,800 °F to 3,000 °F).
The crystallinity of ceramic materials varies widely. Most often, fired ceramics are either vitrified or semi-vitrified, as is the case with earthenware, stoneware, and porcelain. Varying crystallinity and electron composition in the ionic and covalent bonds cause most ceramic materials to be good thermal and electrical insulators (researched in ceramic engineering). With such a large range of possible options for the composition/structure of a ceramic (nearly all of the elements, nearly all types of bonding, and all levels of crystallinity), the breadth of the subject is vast, and identifiable attributes (hardness, toughness, electrical conductivity) are difficult to specify for the group as a whole. General properties such as high melting temperature, high hardness, poor conductivity, high moduli of elasticity, chemical resistance, and low ductility are the norm, with known exceptions to each of these rules (piezoelectric ceramics, low glass transition temperature ceramics, superconductive ceramics).
Composites such as fiberglass and carbon fiber, while containing ceramic materials, are not considered to be part of the ceramic family.
Highly oriented crystalline ceramic materials are not amenable to a great range of processing. Methods for dealing with them tend to fall into one of two categories: either making the ceramic in the desired shape by reaction "in situ" or "forming" powders into the desired shape and then sintering to form a solid body. Ceramic forming techniques include shaping by hand (sometimes including a rotation process called "throwing"), slip casting, tape casting (used for making very thin ceramic capacitors), injection molding, dry pressing, and other variations.
Many ceramics experts do not consider materials with an amorphous (noncrystalline) character (i.e., glass) to be ceramics, even though glassmaking involves several steps of the ceramic process and its mechanical properties are similar to those of ceramic materials. However, heat treatments can convert glass into a semi-crystalline material known as glass-ceramic.
Traditional ceramic raw materials include clay minerals such as kaolinite, whereas more recent materials include aluminium oxide, more commonly known as alumina. Modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance and are therefore used in applications such as the wear plates of crushing equipment in mining operations. Advanced ceramics are also used in the medical, electrical, electronics, and armor industries.
History.
Human beings appear to have been making their own ceramics for at least 26,000 years, subjecting clay and silica to intense heat to fuse and form ceramic materials. The earliest found so far were in southern central Europe and were sculpted figures, not dishes. The earliest known pottery was made by mixing animal products with clay and firing the mixture at high temperature. While pottery fragments have been found up to 19,000 years old, it was not until about 10,000 years later that regular pottery became common. An early people that spread across much of Europe is named after its use of pottery: the Corded Ware culture. These early Indo-European peoples decorated their pottery by wrapping it with rope while it was still wet. When the ceramics were fired, the rope burned off but left a decorative pattern of complex grooves on the surface.
The invention of the wheel eventually led to the production of smoother, more even pottery using the wheel-forming (throwing) technique, as with the potter's wheel. Early ceramics were porous, absorbing water easily. They became useful for more items with the discovery of glazing techniques, which involved coating pottery with silica, bone ash, or other materials that could melt and reform into a glassy surface, making a vessel less pervious to water.
Archaeology.
Ceramic artifacts have an important role in archaeology for understanding the culture, technology, and behavior of peoples of the past. They are among the most common artifacts to be found at an archaeological site, generally in the form of small fragments of broken pottery called sherds. The processing of collected sherds can be consistent with two main types of analysis: technical and traditional.
The traditional analysis involves sorting ceramic artifacts, sherds, and larger fragments into specific types based on style, composition, manufacturing, and morphology. By creating these typologies, it is possible to distinguish between different cultural styles, the purpose of the ceramic, and the technological state of the people, among other conclusions. In addition, by looking at stylistic changes in ceramics over time, it is possible to separate (seriate) the ceramics into distinct diagnostic groups (assemblages). A comparison of ceramic artifacts with known dated assemblages allows for a chronological assignment of these pieces.
The technical approach to ceramic analysis involves a finer examination of the composition of ceramic artifacts and sherds to determine the source of the material and, through this, the possible manufacturing site. Key criteria are the composition of the clay and the temper used in the manufacture of the article under study: the temper is a material added to the clay during the initial production stage and is used to aid the subsequent drying process. Types of temper include shell pieces, granite fragments, and ground sherd pieces called 'grog'. Temper is usually identified by microscopic examination of the tempered material. Clay identification is determined by a process of refiring the ceramic and assigning a color to it using Munsell Soil Color notation. By estimating both the clay and temper compositions and locating a region where both are known to occur, an assignment of the material source can be made. Based on the source assignment of the artifact, further investigations can be made into the site of manufacture.
Properties.
The physical properties of any ceramic substance are a direct result of its crystalline structure and chemical composition. Solid-state chemistry reveals the fundamental connection between microstructure and properties, such as localized density variations, grain size distribution, type of porosity, and second-phase content, which can all be correlated with ceramic properties such as mechanical strength σ by the Hall-Petch equation, hardness, toughness, dielectric constant, and the optical properties exhibited by transparent materials.
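As a sketch of the Hall-Petch relation cited above, yield strength follows σ_y = σ_0 + k_y·d^(−1/2), rising as grain size d shrinks. The constants used below are hypothetical placeholders, not measured values for any particular ceramic:

```python
# Hall-Petch relation: yield strength rises as grain size d shrinks.
# sigma0 and k below are illustrative placeholders.
import math

def hall_petch(d_m: float, sigma0_mpa: float = 100.0,
               k_mpa_sqrt_m: float = 0.1) -> float:
    """Yield strength in MPa for mean grain diameter d in metres."""
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(d_m)

for d in (10e-6, 1e-6, 100e-9):  # 10 um, 1 um, 100 nm grains
    print(f"d = {d:.0e} m -> sigma_y ~ {hall_petch(d):.0f} MPa")
```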
Ceramography is the art and science of preparation, examination, and evaluation of ceramic microstructures. Evaluation and characterization of ceramic microstructures are often implemented on similar spatial scales to that used commonly in the emerging field of nanotechnology: from nanometers to tens of micrometers (µm). This is typically somewhere between the minimum wavelength of visible light and the resolution limit of the naked eye.
The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks, structural defects, and hardness micro-indentations. Most bulk mechanical, optical, thermal, electrical, and magnetic properties are significantly affected by the observed microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the cleaved and polished microstructure. Physical properties which constitute the field of materials science and engineering include the following:
Mechanical properties.
Mechanical properties are important in structural and building materials as well as textile fabrics. In modern materials science, fracture mechanics is an important tool in improving the mechanical performance of materials and components. It applies the physics of stress and strain, in particular the theories of elasticity and plasticity, to the microscopic crystallographic defects found in real materials in order to predict the macroscopic mechanical failure of bodies. Fractography is widely used with fracture mechanics to understand the causes of failures and also verify the theoretical failure predictions with real-life failures.
Ceramic materials are usually ionic or covalent bonded materials. A material held together by either type of bond will tend to fracture before any plastic deformation takes place, which results in poor toughness and brittle behavior in these materials. Additionally, because these materials tend to be porous, pores and other microscopic imperfections act as stress concentrators, decreasing the toughness further, and reducing the tensile strength. These combine to give catastrophic failures, as opposed to the more ductile failure modes of metals.
These materials do show plastic deformation. However, because of the rigid structure of crystalline material, there are very few available slip systems for dislocations to move, and so they deform very slowly.
To overcome the brittle behavior, ceramic material development has introduced the class of ceramic matrix composite materials, in which ceramic fibers are embedded and with specific coatings are forming fiber bridges across any crack. This mechanism substantially increases the fracture toughness of such ceramics. Ceramic disc brakes are an example of using a ceramic matrix composite material manufactured with a specific process.
Scientists are working on developing ceramic materials that can withstand significant deformation without breaking. The first such material that can deform at room temperature was found in 2024.
Toughening mechanisms.
Many strategies are employed to improve the toughness of ceramics to prevent fracture. This includes crack deflection, microcrack toughening, crack bridging, incorporation of ductile particles, and transformation toughening.
Crack deflection is a toughening mechanism that involves deflecting cracks away from more rapid crack propagation paths, preventing catastrophic sudden failure. Cracks may be deflected using microstructures such as whiskers, as in the use of silicon carbide whiskers to reinforce molybdenum disilicide ceramic material in a 1987 paper. Crack deflecting second phases may also take the form of platelets, particles, or fibers.
Microcrack toughening involves nucleation (creation) of microcracks near a macroscopic crack tip where the crack propagates, which lowers the stress experienced by the tip and therefore the urgency of crack propagation. To improve toughness, second phase particles can be incorporated into ceramic such that they are subject to microcracking, which relieves stress to prevent fracture.
Crack bridging occurs when a strong discontinuous reinforcing phase applies a force behind the propagating tip of the crack that discourages further cracking. These second phase bridges essentially pin the crack to discourage its extension. Crack bridging can be used to improve toughness via the incorporation of second phase whiskers in the ceramic, as well as other shapes, to bridge cracks.
Ductile particle ceramic matrix composites are composed of ductile particles such as metals distributed in a ceramic matrix. These particles boost toughness by deforming plastically to absorb energy, and by bridging advancing cracks. To be most effective, the particles should be isolated from each other. The most studied iterations of these composites consist of an alumina matrix, and nickel, iron, molybdenum, copper, or silver metal particles.
Transformation toughening occurs when a material undergoes a stress-induced phase transformation. Some ceramics are capable of undergoing stress-induced martensitic transformation, which involves an energy barrier that must be overcome by absorbing energy. Martensitic transformations are diffusionless shear transformations involving the transition between an "austenite" or "parent" phase that is stable at higher temperatures and a "martensitic" phase that is stable at lower temperatures. Because the transformation absorbs energy, stress-induced martensitic transformations can hinder crack progression and increase toughness. A key example of this phenomenon is zirconia, whose martensitic transformation involves a crystal structure transformation from a tetragonal crystal structure (the austenite phase) to a monoclinic structure. The volume increase associated with the transformation from tetragonal to monoclinic also relieves tensile stress at the crack tip, further discouraging cracking and increasing toughness. When zirconia particles in a ceramic matrix undergo transformation during fabrication due to cooling, the stress fields around the particles lead to nucleation and extension of microcracks, which can also improve the toughness of the material. These stress fields, as well as the particles themselves, can also contribute to crack deflection.
Ice-templating for enhanced mechanical properties.
Ice-templating is a processing technique that allows some control of the microstructure of a ceramic product and therefore some control of its mechanical properties. Ceramic engineers use this technique to tune the mechanical properties to their desired application; specifically, the strength is increased when it is employed. Ice templating allows the creation of macroscopic pores in a unidirectional arrangement. Applications of this oxide-strengthening technique include solid oxide fuel cells and water filtration devices.
To process a sample through ice templating, an aqueous colloidal suspension is prepared containing the ceramic powder, for example yttria-stabilized zirconia (YSZ), evenly dispersed throughout the colloid. The solution is then cooled from the bottom to the top on a platform that allows for unidirectional cooling. This forces ice crystals to grow in compliance with the unidirectional cooling, and these ice crystals force the dispersed YSZ particles to the solidification front of the solid-liquid interphase boundary, resulting in pure ice crystals lined up unidirectionally alongside concentrated pockets of colloidal particles. The sample is then heated while the pressure is reduced enough to force the ice crystals to sublime, and the YSZ pockets begin to anneal together to form macroscopically aligned ceramic microstructures. The sample is then further sintered to complete the evaporation of the residual water and the final consolidation of the ceramic microstructure.
During ice-templating, a few variables can be controlled to influence the pore size and morphology of the microstructure. These important variables are the initial solids loading of the colloid, the cooling rate, the sintering temperature and duration, and the use of certain additives which can influence the microstructural morphology during the process. A good understanding of these parameters is essential to understanding the relationships between processing, microstructure, and mechanical properties of anisotropically porous materials.
Electrical properties.
Semiconductors.
Some ceramics are semiconductors. Most of these are transition metal oxides that are II-VI semiconductors, such as zinc oxide. While there are prospects of mass-producing blue light-emitting diodes (LED) from zinc oxide, ceramicists are most interested in the electrical properties that show grain boundary effects. One of the most widely used of these is the varistor. These are devices that exhibit the property that resistance drops sharply at a certain threshold voltage. Once the voltage across the device reaches the threshold, there is a breakdown of the electrical structure in the vicinity of the grain boundaries, which results in its electrical resistance dropping from several megohms down to a few hundred ohms. The major advantage of these is that they can dissipate a lot of energy, and they self-reset; after the voltage across the device drops below the threshold, its resistance returns to being high. This makes them ideal for surge-protection applications; as there is control over the threshold voltage and energy tolerance, they find use in all sorts of applications. The best demonstration of their ability can be found in electrical substations, where they are employed to protect the infrastructure from lightning strikes. They have rapid response, are low maintenance, and do not appreciably degrade from use, making them virtually ideal devices for this application. Semiconducting ceramics are also employed as gas sensors. When various gases are passed over a polycrystalline ceramic, its electrical resistance changes. With tuning to the possible gas mixtures, very inexpensive devices can be produced.
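Varistor behaviour of this kind is often summarized by the empirical power law I = kV^α, where a large α produces the sharp knee described above. A minimal sketch with illustrative constants (not taken from any device datasheet):

```python
# Empirical varistor law I = k * V**alpha. With alpha = 30 the effective
# resistance collapses over a small voltage range around the threshold.
# k and alpha are illustrative, not datasheet values.
def varistor_current(v: float, k: float = 1e-60, alpha: float = 30.0) -> float:
    return k * v ** alpha

for v in (80.0, 100.0, 120.0):
    i = varistor_current(v)
    print(f"V = {v:5.1f} V -> I = {i:9.3e} A, R_eff = {v / i:9.3e} ohm")
```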
Superconductivity.
Under some conditions, such as extremely low temperatures, some ceramics exhibit high-temperature superconductivity (in superconductivity, "high temperature" means above 30 K). The reason for this is not understood, but there are two major families of superconducting ceramics.
Ferroelectricity and supersets.
Piezoelectricity, a link between electrical and mechanical response, is exhibited by a large number of ceramic materials, including the quartz used to measure time in watches and other electronics. Such devices use both properties of piezoelectrics, using electricity to produce a mechanical motion (powering the device) and then using this mechanical motion to produce electricity (generating a signal). The unit of time measured is the natural interval required for electricity to be converted into mechanical energy and back again.
The piezoelectric effect is generally stronger in materials that also exhibit pyroelectricity, and all pyroelectric materials are also piezoelectric. These materials can be used to inter-convert between thermal, mechanical, or electrical energy; for instance, after synthesis in a furnace, a pyroelectric crystal allowed to cool under no applied stress generally builds up a static charge of thousands of volts. Such materials are used in motion sensors, where the tiny rise in temperature from a warm body entering the room is enough to produce a measurable voltage in the crystal.
In turn, pyroelectricity is seen most strongly in materials that also display the ferroelectric effect, in which a stable electric dipole can be oriented or reversed by applying an electrostatic field. Pyroelectricity is also a necessary consequence of ferroelectricity. This can be used to store information in ferroelectric capacitors, elements of ferroelectric RAM.
The most common such materials are lead zirconate titanate and barium titanate. Aside from the uses mentioned above, their strong piezoelectric response is exploited in the design of high-frequency loudspeakers, transducers for sonar, and actuators for atomic force and scanning tunneling microscopes.
Positive thermal coefficient.
Temperature increases can cause grain boundaries to suddenly become insulating in some semiconducting ceramic materials, mostly mixtures of heavy metal titanates. The critical transition temperature can be adjusted over a wide range by variations in chemistry. In such materials, current will pass through the material until joule heating brings it to the transition temperature, at which point the circuit will be broken and current flow will cease. Such ceramics are used as self-controlled heating elements in, for example, the rear-window defrost circuits of automobiles.
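The self-regulating action can be illustrated with a crude lumped thermal model: joule heating drives the element toward the transition temperature, where the resistance jump cuts the power and the temperature settles. All numbers below are hypothetical placeholders, and the step change in resistance is an idealization of the real, steep-but-smooth transition.

```python
# Idealized PTC heating element: low resistance below the transition
# temperature t_c, high above it. A simple forward-Euler thermal model
# shows the temperature settling near t_c. All values are illustrative.
def resistance(temp_c: float, t_c: float = 120.0,
               r_low: float = 10.0, r_high: float = 1e5) -> float:
    return r_low if temp_c < t_c else r_high

def simulate(v: float = 12.0, t_ambient: float = 20.0,
             loss_w_per_c: float = 0.05, heat_cap_j_per_c: float = 2.0,
             dt_s: float = 1.0, steps: int = 600) -> float:
    temp = t_ambient
    for _ in range(steps):
        p_in = v ** 2 / resistance(temp)           # joule heating
        p_out = loss_w_per_c * (temp - t_ambient)  # losses to ambient
        temp += (p_in - p_out) * dt_s / heat_cap_j_per_c
    return temp

print(f"temperature after 10 min: ~{simulate():.0f} C")  # hovers near t_c
```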
At the transition temperature, the material's dielectric response becomes theoretically infinite. While a lack of temperature control would rule out any practical use of the material near its critical temperature, the dielectric effect remains exceptionally strong even at much higher temperatures. Titanates with critical temperatures far below room temperature have become synonymous with "ceramic" in the context of ceramic capacitors for just this reason.
Optical properties.
The study of optically transparent materials concerns the response of a material to incoming light waves of a range of wavelengths. Frequency-selective optical filters can be utilized to alter or enhance the brightness and contrast of a digital image. Guided lightwave transmission via frequency-selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation, though low powered, is virtually lossless. Optical waveguides are used as components in integrated optical circuits (e.g. light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems. Also of value to the emerging materials scientist is the sensitivity of materials to radiation in the thermal infrared (IR) portion of the electromagnetic spectrum. This heat-seeking ability is responsible for such diverse optical phenomena as night-vision and IR luminescence.
Thus, there is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light (electromagnetic waves) in the visible (0.4 – 0.7 micrometers) and mid-infrared (1 – 5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armor, including next-generation high-speed missiles and pods, as well as protection against improvised explosive devices (IED).
In the 1960s, scientists at General Electric (GE) discovered that under the right manufacturing conditions, some ceramics, especially aluminium oxide (alumina), could be made translucent. These translucent materials were transparent enough to be used for containing the electrical plasma generated in high-pressure sodium street lamps. During the past two decades, additional types of transparent ceramics have been developed for applications such as nose cones for heat-seeking missiles, windows for fighter aircraft, and scintillation counters for computed tomography scanners. Other ceramic materials, generally requiring greater purity in their make-up than those above, include forms of several chemical compounds, including:
Products.
By usage.
For convenience, ceramic products are usually divided into four main types; these are shown below with some examples:
Ceramics made with clay.
Frequently, the raw materials of modern ceramics do not include clays.
Those that do have been classified as:
Classification.
Ceramics can also be classified into three distinct material categories:
Each one of these classes can be developed into unique material properties.
|
6459
|
27823944
|
https://en.wikipedia.org/wiki?curid=6459
|
Wuxing (Chinese philosophy)
|
(), usually translated as Five Phases or Five Agents, is a fivefold conceptual scheme used in many traditional Chinese fields of study to explain a wide array of phenomena, including terrestrial and celestial relationships, influences, and cycles, that characterise the interactions and relationships in science, medicine, politics, religion, social relationships, and education within Chinese culture.
The five agents are traditionally associated with the classical planets Mars, Mercury, Jupiter, Venus, and Saturn, as described in the etymology section below. In ancient Chinese astronomy and astrology, which spread throughout East Asia, the scheme was reflected in the seven-day planetary order of Fire, Water, Wood, Metal, and Earth. In their "heavenly stems" generative cycle, represented in the cycles section below and depicted in the diagram above, the agents run consecutively clockwise as Wood, Fire, Earth, Metal, and Water; in their overacting destructive arrangement of Wood, Earth, Water, Fire, and Metal, natural disasters, calamity, illness, and disease were said to ensue.
The "wuxing" system has been in use since the second or first century BCE during the Han dynasty. It appears in many seemingly disparate fields of early Chinese thought, including music, feng shui, alchemy, astrology, martial arts, military strategy, "I Ching" divination, religion and traditional medicine, serving as a metaphysics based on cosmic analogy.
Etymology.
"Wuxing" originally referred to the five classical planets (from brightest to dimmest: Venus, Jupiter, Mercury, Mars, Saturn), which were with the combination of the Sun and the Moon, conceived as creating the five forces of earthly life (including yang and yin). This is why the word is composed of Chinese characters meaning "five" () and "moving" (). "Moving" is shorthand for "planets", since the word for planets in Chinese has been translated as "moving stars" (). Some of the Mawangdui Silk Texts (before 168 BC) also connect the "wuxing" to the "wude" (), the Five Virtues and Five Emotions . Scholars believe that various predecessors to the concept of "wuxing" were merged into one system of many interpretations in the Han dynasty.
"Wuxing" was first translated into English as "the Five Elements", drawing parallels with the Greek and Indian Vedic static, solid or formative arrangement of the four elements. This translation is still in common use among practitioners of Traditional Chinese medicine, such as in the name of Five Element acupuncture and Japanese meridian therapy. However, this analogy could be misleading as the four elements are concerned with form, substance and quantity, whereas the post-heaven arrangement of the "wuxing" are "primarily concerned with process, change, and quality". For example, the "wuxing" element "Wood" is more accurately thought of as the "vital essence" and growth of trees rather than the physical innate substance wood. This led sinologist Nathan Sivin to propose the alternative translation "five phases" in 1987. But "phase" also fails to capture the full meaning of "wuxing". In some contexts, the "wuxing" are indeed associated with physical substances. Historian of Chinese medicine Manfred Porkert proposed the (somewhat unwieldy) term "Evolutive Phase". Perhaps the most widely accepted translation among modern scholars is the "five agents" or "five transformations".
Cycles.
In traditional doctrine, the five phases are connected in two cycles of interactions: a promoting or generative ( "shēng") cycle, also known as "mother-son"; and an overacting or destructive ( "kè") cycle, also known as "grandfather-grandson" (see diagram). Each of these cycles can be interpreted and analyzed in a forward or reversed direction. In addition to these, there is also an "overacting", or excessively generating, version of the destructive cycle; the standard sequences are summarized in the sketch following the cycle listings below.
Inter-promoting.
The generative cycle ( "xiāngshēng") is: Wood feeds Fire; Fire, burning, creates Earth (ash); Earth bears Metal; Metal collects Water; and Water nourishes Wood.
Inter-regulating.
The destructive cycle ( "xiāngkè") is: Wood parts Earth; Earth dams Water; Water extinguishes Fire; Fire melts Metal; and Metal chops Wood.
Overacting.
The excessive destructive cycle ( "xiāngchéng") follows the destructive sequence acting in excess: Wood overacts on Earth; Earth on Water; Water on Fire; Fire on Metal; and Metal on Wood.
Weakening.
The reverse generative cycle (/ "xiāngxiè") runs the generative sequence backwards, each "son" draining its "mother": Wood weakens Water; Water weakens Metal; Metal weakens Earth; Earth weakens Fire; and Fire weakens Wood.
Counteracting.
A reverse or deficient destructive cycle ( "xiāngwǔ" or "xiānghào") runs the destructive sequence backwards, the normally controlled phase counteracting its controller: Wood counteracts Metal; Metal counteracts Fire; Fire counteracts Water; Water counteracts Earth; and Earth counteracts Wood.
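As a compact summary of the relationships above, the following sketch encodes the traditional generative and destructive orders as mappings; the variable and function names are illustrative, not standard terminology:

```python
# The sheng (generative) and ke (destructive) cycles as mappings.
GENERATES = {"Wood": "Fire", "Fire": "Earth", "Earth": "Metal",
             "Metal": "Water", "Water": "Wood"}
OVERCOMES = {"Wood": "Earth", "Earth": "Water", "Water": "Fire",
             "Fire": "Metal", "Metal": "Wood"}

def weakens(phase: str) -> str:
    """Reverse generative (xie) relation: each 'son' drains its 'mother'."""
    return next(m for m, s in GENERATES.items() if s == phase)

print(GENERATES["Wood"])   # Fire  (Wood feeds Fire)
print(OVERCOMES["Water"])  # Fire  (Water extinguishes Fire)
print(weakens("Fire"))     # Wood  (Fire drains Wood)
```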
Celestial stem.
Ming nayin.
In Ziwei divination, "nayin" () further classifies the Five Elements into 60 "ming" (), or life orders, based on the ganzhi. Similar to the astrology zodiac, the "ming" is used by fortune-tellers to analyse individual personality and destiny.
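The ganzhi underlying this classification is the sexagenary cycle: ten heavenly stems paired with twelve earthly branches yield 60 combinations. A minimal sketch of the standard year formula follows; the traditional 30-entry nayin lookup that assigns an element to each pair of years is omitted here:

```python
# Sexagenary (ganzhi) year names. The formula below is the standard one
# for CE years; nayin then maps each consecutive pair of ganzhi years to
# one of the five elements via a traditional table, omitted here.
STEMS = ["jia", "yi", "bing", "ding", "wu",
         "ji", "geng", "xin", "ren", "gui"]
BRANCHES = ["zi", "chou", "yin", "mao", "chen", "si",
            "wu", "wei", "shen", "you", "xu", "hai"]

def ganzhi(year_ce: int) -> str:
    """Stem-branch name of a CE year; 1984 opens a cycle with jia-zi."""
    return f"{STEMS[(year_ce - 4) % 10]}-{BRANCHES[(year_ce - 4) % 12]}"

print(ganzhi(1984))  # jia-zi
print(ganzhi(2024))  # jia-chen
```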
Applications.
The "wuxing" schema is applied to explain phenomena in various fields.
Phases of the year.
The five phases are around 73 days each and are usually used to describe the transformations of nature rather than their formative states.
Cosmology and feng shui.
The art of feng shui (Chinese geomancy) is based on "wuxing", with the structure of the cosmos mirroring the five phases, as well as bagua (the eight trigrams). Each phase has a complex network of associations with different aspects of nature (see table): colors, seasons and shapes all interact according to the cycles.
An interaction or energy flow can be expansive, destructive, or exhaustive, depending on the cycle to which it belongs. By understanding these energy flows, a feng shui practitioner attempts to rearrange energy to benefit the client.
Dynastic transitions.
According to the Warring States period political philosopher Zou Yan ( BCE), each of the five elements possesses a personified virtue (), which indicates the foreordained destiny () of a dynasty; hence the cyclic succession of the elements also indicates dynastic transitions. Zou Yan claims that the Mandate of Heaven sanctions the legitimacy of a dynasty by sending self-manifesting auspicious signs in the ritual color (white, green, black, red, and yellow) that matches the element of the new dynasty (Metal, Wood, Water, Fire, and Earth). From the Qin dynasty onward, most Chinese dynasties invoked the theory of the Five Elements to legitimize their reign.
Chinese medicine.
The interdependence of "zangfu" networks in the body was said to be a circle of five things, and so mapped by the ancient Chinese doctors onto categories of syndromes and patterns called the five phases.
In order to explain the integrity and complexity of the human body, Chinese medical scientists and physicians use the Five Elements theory to classify the body's endogenous influences on organs, physiological activities, and pathological reactions, as well as exogenous (external, environmental) influences. This diagnostic capacity is extensively used in traditional five-phase acupuncture today, as opposed to the modern Confucian-styled, eight-principles-based Traditional Chinese medicine. In combination, the two systems form a formative and functional study of prenatal and postnatal influences on genetics in the form of epigenetics, biology, physiology, psychology, sociology, and ecology.
Music.
The "Huainanzi" and the "Yueling" chapter () of the "Book of Rites" make the following correlations:
Martial arts.
Wuxing being an influential philosophical concept, there are several Chinese martial arts and a few other east Asian styles that incorporate five phases concepts into their systems.
Tai chi trains and focuses on five basic qualities as part of its overarching strategy.
The Five Steps () are: advance, retreat, look left, gaze right, and central equilibrium.
These five steps are not mutable states in tai chi.
Xingyi Quan uses the five elements metaphorically to represent five different kinds of energy; but since energy work is subtle, one normally starts by learning five basic techniques with complementary footwork that teach the basic concepts behind the energies. Ideally, one can use any technique with any kind of energy, but there are different levels of skill one must go through.
In Xingyi Quan, realization of the five energies has three basic levels: obvious power, subtle power, and mysterious power.
The Five Animals in Shaolin martial arts are an extension of wuxing theory, as their qualities embody and represent the energetic qualities of the five phases in the animal kingdom. They are the Tiger, Crane, Leopard, Snake, and Dragon.
"Wuxing Heqidao", (Gogyo Aikido 五行合气道) is a life art with roots in Confucian, Taoists and Buddhist theory. It centers around applied peace and health studies rather than defence or physical action. It emphasizes the unification of mind, body and environment using the physiological theory of yin and yang as well as five-element Traditional Chinese medicine. Its movements, exercises, and teachings cultivate, direct, and harmonise the "qi".
Gogyo.
The Japanese term is "gogyō" (五行). During the 5th and 6th centuries (the Kofun period), Japan adopted various philosophical disciplines such as Taoism, Chinese Buddhism, and Confucianism through monks and physicians from China, which helped the "Onmyōdō" system evolve. This stands in contrast to the theory of Godai, a form-based philosophy introduced to Japan through Indian and Tibetan Buddhism. Both sets of theories have been extensively practiced in Japanese acupuncture and traditional Kampo medicine.
Church of Christ, Scientist
The Church of Christ, Scientist was founded in 1879 in Boston, Massachusetts, by Mary Baker Eddy, author of "Science and Health with Key to the Scriptures," and founder of Christian Science. The church was founded "to commemorate the word and works of Christ Jesus" and "reinstate primitive Christianity and its lost element of healing".
In the early decades of the 20th century, Christian Science churches were founded in communities around the world, though in the last several decades of that century, there was a marked decline in membership, except in Africa, where there has been growth. Headquartered in Boston, the church does not officially report membership, and estimates as to worldwide membership range from under 100,000 to about 400,000. In 2010, there were 1,153 churches in the United States.
History.
The church was incorporated by Mary Baker Eddy in 1879, following a claimed personal healing in 1866, which she said resulted from reading the Bible. The Bible and Eddy's textbook on Christian healing, "Science and Health with Key to the Scriptures", are together the church's key doctrinal sources and have been ordained as the church's "dual impersonal pastor".
The First Church of Christ, Scientist publishes the weekly newspaper "The Christian Science Monitor" in print and online.
Beliefs and practices.
Christian Scientists believe that prayer is effective for healing diseases. The Church has collected over 50,000 testimonies of incidents that it considers to be healings accomplished through Christian Science treatment alone. While most of these testimonies concern ailments neither diagnosed nor treated by medical professionals, the Church requires three other people to vouch for any testimony published in any of its official organs, including the "Christian Science Journal", "Christian Science Sentinel", and "Herald of Christian Science"; verifiers attest that they witnessed the healing or know the testifier well enough to vouch for them.
A Christian Science practitioner is someone who devotes their full time to prayer for others, but they do not use drugs or make medical diagnoses. Christian Scientists may take an intensive two-week "Primary" class from an authorized Christian Science teacher. Those who wish to become ""Journal"-listed" (accredited) practitioners, devoting themselves full-time to the practice of healing, must first have Primary class instruction. When they have what the church regards as a record of healing, they may submit their names for publication in the directory of practitioners and teachers in the "Christian Science Journal". A practitioner who has been listed for at least three years may apply for "Normal" class instruction, given once every three years. Those who receive a certificate are authorized to teach. Both Primary and Normal classes are based on the Bible and the writings of Mary Baker Eddy. The Primary class focuses on the chapter "Recapitulation" in "Science and Health with Key to the Scriptures". This chapter uses the Socratic method of teaching and contains the "Scientific Statement of Being". The "Normal" class focuses on the platform of Christian Science, contained on pages 330-340 of "Science and Health."
Organization.
The First Church of Christ, Scientist is the legal title of The Mother Church and administrative headquarters of the Christian Science Church. The Mary Baker Eddy Library for the Betterment of Humanity is housed in an 11-story structure originally built for The Christian Science Publishing Society.
An international newspaper, "The Christian Science Monitor", founded by Eddy in 1908 and winner of seven Pulitzer Prizes, is published by the church through the Christian Science Publishing Society.
Board of directors.
The Christian Science Board of Directors is a five-person executive entity created by Mary Baker Eddy to conduct the business of the Christian Science Church under the terms defined in the by-laws of the "Church Manual". Its functions and restrictions are defined by the "Manual".
Controversies.
Broadcasting.
Beginning in the mid-1980s, church executives undertook a controversial and ambitious foray into electronic broadcast media. The first significant effort was to create a weekly half-hour syndicated television program, "The Christian Science Monitor" Reports. "Monitor Reports" was anchored in its first season by newspaper veteran Rob Nelson. He was replaced in the second by the "Christian Science Monitor"'s former Moscow correspondent, David Willis.
In October 1991, Christian Science Monitor anchor John Hart, who is not a Christian Scientist, resigned following professional disputes with the Monitor regarding Christian Science teachings and his journalistic independence.
The hundreds of millions lost on broadcasting brought the church to the brink of bankruptcy. However, with the 1991 publication of "The Destiny of The Mother Church" by the late Bliss Knapp, the church secured a $90 million bequest from the Knapp trust. The trust dictated that the book be published as "Authorized Literature", with neither modification nor comment. Historically, the church had censured Knapp for deviating at several points from Eddy's teaching, and had refused to publish the work. The church's archivist, fired in anticipation of the book's publication, wrote to branch churches to inform them of the book's history. Many Christian Scientists thought the book violated the church's by-laws, and the editors of the church's religious periodicals and several other church employees resigned in protest. Alternate beneficiaries subsequently sued to contest the church's claim it had complied fully with the will's terms, and the church ultimately received only half of the original sum.
The fallout of the broadcasting debacle also sparked a minor revolt among some prominent church members. In late 1993, a group of Christian Scientists filed suit against the Board of Directors, alleging a willful disregard for the "Manual of The Mother Church" in its financial dealings. The suit was thrown out by the Supreme Judicial Court of Massachusetts in 1997, but a lingering discontent with the church's financial matters persists to this day. "The Destiny of The Mother Church" ceased publication in September 2023.
Membership decline and financial setbacks.
In spite of its early meteoric rise, church membership has declined over the past eight decades, according to the church's former treasurer, J. Edward Odegaard. Though the Church is prohibited by the Manual from publishing membership figures, the number of branch churches in the United States has fallen steadily since World War II. In 2009, for the first time in church history, more new members came from Africa than the United States.
In 2005, "The Boston Globe" reported that the church was considering consolidating Boston operations into fewer buildings and leasing out space in buildings it owned. Church official Philip G. Davis noted that the administration and Colonnade buildings had not been fully used for many years and that vacancy increased after staff reductions in 2004. The church posted an $8 million financial loss in fiscal 2003, and in 2004 cut 125 jobs, a quarter of the staff, at the "Christian Science Monitor". Conversely, Davis noted that "the financial situation right now is excellent" and stated that the church was not facing financial problems.
Use of spiritual healing in place of medical treatment.
The use of prayer, often in place of medical treatment, has been an area of controversy since the founding of the church; the legality of practicing Christian Science was questioned as early as 1887, when some Christian Science practitioners were charged with practicing medicine without a license. Avoidance of medical care is not a doctrinal obligation and is considered a personal choice. However, during the 1980s and 1990s in the United States, a number of Christian Scientist parents whose children died from lack of access to medical treatment became the subject of considerable controversy and were charged with manslaughter or even murder, with inconsistent outcomes across cases. The lack of consensus regarding medical care is reflected in the laws of various U.S. states, which have likewise been inconsistent regarding religious exemptions from medical care.
Connecticut
Connecticut is a state in the New England region of the Northeastern United States. It borders Rhode Island to the east, Massachusetts to the north, New York to the west, and Long Island Sound to the south. Its capital is Hartford, and its most populous city is Bridgeport. Connecticut lies between the major hubs of New York City and Boston along the Northeast Corridor, where the New York-Newark Combined Statistical Area, which includes four of Connecticut's seven largest cities, extends into the southwestern part of the state. Connecticut is the third-smallest state by area after Rhode Island and Delaware, and the 29th most populous with more than 3.6 million residents as of 2024, ranking it fourth among the most densely populated U.S. states.
The state is named after the Connecticut River, the longest in New England, which roughly bisects the state and drains into the Long Island Sound between the towns of Old Saybrook and Old Lyme. The name of the river is in turn derived from anglicized spellings of a Mohegan-Pequot word for "long tidal river". Before the arrival of the first European settlers, the region was inhabited by various Algonquian tribes. In 1633, the Dutch West India Company established a small, short-lived settlement called House of Hope in Hartford. Half of Connecticut was initially claimed by the Dutch colony New Netherland, which included much of the land between the Connecticut and Delaware Rivers, although the first major settlements were established by the English around the same time. Thomas Hooker led a band of followers from the Massachusetts Bay Colony to form the Connecticut Colony, while other settlers from Massachusetts founded the Saybrook Colony and the New Haven Colony; both had merged into the first by 1664.
Connecticut's official nickname, the "Constitution State", refers to the Fundamental Orders adopted by the Connecticut Colony in 1639, which is considered by some to be the first written constitution in Western history. As one of the Thirteen Colonies that rejected British rule during the American Revolution, Connecticut was influential in the development of the federal government of the United States. In 1787, Roger Sherman and Oliver Ellsworth, state delegates to the Constitutional Convention, proposed a compromise between the Virginia and New Jersey Plans; its bicameral structure for Congress, with a respectively proportional and equal representation of the states in the House of Representatives and Senate, was adopted and remains to this day. In January 1788, Connecticut became the fifth state to ratify the Constitution.
Connecticut is a developed and affluent state, performing well on the Human Development Index and on different metrics of income except for equality. It is home to a number of prestigious educational institutions, including Yale University in New Haven, as well as other liberal arts colleges and private boarding schools in and around the "Knowledge Corridor". Due to its geography, Connecticut has maintained a strong maritime tradition; the United States Coast Guard Academy is located in New London by the Thames River. The state is also associated with the aerospace industry through major companies Pratt & Whitney and Sikorsky Aircraft headquartered in East Hartford and Stratford, respectively. Historically a manufacturing center for arms, hardware, and timepieces, Connecticut, as with the rest of the region, had transitioned into an economy based on the financial, insurance, and real estate sectors; many multinational firms providing such services can be found concentrated in the state capital of Hartford and along the Gold Coast in Fairfield County.
History.
First people.
The name Connecticut is derived from the Mohegan-Pequot word that has been translated as "long tidal river" and "upon the long river", both referring to the Connecticut River. Evidence of human presence in the Connecticut region dates to as far back as 10,000 years ago. Stone tools were used for hunting, fishing, and woodworking. Semi-nomadic in lifestyle, these peoples moved seasonally to take advantage of various resources in the area. They shared languages based on Algonquian. The Connecticut region was inhabited by many Native American tribes that can be grouped into the Nipmuc, the Sequin or "River Indians" (which included the Tunxis, Schaghticoke, Podunk, Wangunk, Hammonasset, and Quinnipiac), the Mattabesec or "Wappinger Confederacy" and the Pequot-Mohegan. Some of these groups still reside in Connecticut, including the Mohegans, the Pequots, and the Paugusetts.
Colonial period.
Dutchman Adriaen Block was the first European explorer in Connecticut. He explored the region in 1614. Dutch fur traders then sailed up the Connecticut River, calling it Versche Rivier ("Fresh River") and building a fort at Dutch Point in Hartford, which they named "House of Hope" ().
The Connecticut Colony originally consisted of several smaller settlements in Windsor, Wethersfield, Saybrook, Hartford, and New Haven. The first English settlers came in 1633 and settled at Windsor, then at Wethersfield the following year. John Winthrop the Younger of Massachusetts received a commission to create Saybrook Colony at the mouth of the Connecticut River in 1635.
A large group of Puritans arrived in 1636 from Massachusetts Bay Colony, led by Thomas Hooker, who established the Connecticut Colony at Hartford. The Fundamental Orders of Connecticut were adopted in January 1639, and have been described as the first constitutional document in America.
The Quinnipiack Colony was established by John Davenport, Theophilus Eaton, and others at New Haven in March 1638. The New Haven Colony had its own constitution called "The Fundamental Agreement of the New Haven Colony", signed on June 4, 1639.
Each settlement was an independent political entity, established without official sanction of the English Crown. In 1662, Winthrop traveled to England and obtained a charter from CharlesII which united the settlements of Connecticut. Historically significant colonial settlements included Windsor (1633), Wethersfield (1634), Saybrook (1635), Hartford (1636), New Haven (1638), Fairfield (1639), Guilford (1639), Milford (1639), Stratford (1639), Farmington (1640), Stamford (1641), and New London (1646).
The Pequot War marked the first significant clash between colonists and Native Americans in New England. The Pequot had been aggressively extending their area of control at the expense of the Wampanoag to the north, Narragansett (east), Connecticut River Valley Algonquian tribes and the Mohegan (west), and Lenape Algonquian people (south). Meanwhile, the Pequot had been reacting with increasing aggression to colonial territorial expansion. In response to the 1636 murder of an English privateer and his crew, followed by the murder of a trader, colonists raided a Pequot village on Block Island. The Pequots laid siege to Saybrook Colony's garrison that autumn, then raided Wethersfield in the spring of 1637. Organizing a band of militia and allies from the Mohegan and Narragansett tribes, colonists declared war and attacked a Pequot village on the Mystic River. Death toll estimates range between 300 and 700 Pequots. After suffering another major loss at a battle in Fairfield, the Pequots sued for peace.
Connecticut's original charter of 1662 granted it all the land to the "South Sea", that is, to the Pacific Ocean. The Hartford Treaty with the Dutch, signed on September 19, 1650, but never ratified by the British, stated that the western boundary of Connecticut ran north from Greenwich Bay, provided that the line not approach within a set distance of the Hudson River. This agreement was observed by both sides until war erupted between England and The Netherlands in 1652. Conflict continued concerning colonial limits until the Duke of York captured New Netherland in 1664.
Most Colonial royal grants were for long east–west strips. Connecticut took its grant seriously and established a ninth county between the Susquehanna River and Delaware River named Westmoreland County. This resulted in the brief Pennamite-Yankee Wars with Pennsylvania.
Yale College was established in 1701, providing Connecticut with an important institution to educate clergy and civil leaders. The Congregational church dominated religious life in the colony and, by extension, town affairs in many parts.
With a long coastline that includes its navigable rivers, Connecticut developed during its colonial years the antecedents of a maritime tradition that would later produce booms in shipbuilding, marine transport, naval support, seafood production, and leisure boating.
Historical records list the "Tryall" as the first vessel built in Connecticut Colony, in 1649 at a site on the Connecticut River in present-day Wethersfield. In the two decades leading up to 1776 and the American Revolution, Connecticut boatyards launched about 100 sloops, schooners and brigs according to a database of U.S. customs records maintained online by the Mystic Seaport Museum, the largest being the 180-ton "Patient Mary" launched in New Haven in 1763. Connecticut's first lighthouse was constructed in 1760 at the mouth of the Thames River with the New London Harbor Lighthouse.
American Revolution.
Connecticut designated four delegates to the Second Continental Congress who signed the Declaration of Independence: Samuel Huntington, Roger Sherman, William Williams, and Oliver Wolcott. Connecticut's legislature authorized the outfitting of six new regiments in 1775, in the wake of the clashes between British regulars and Massachusetts militia at Lexington and Concord. There were some 1,200 Connecticut troops on hand at the Battle of Bunker Hill in June 1775. In 1775, David Bushnell invented the "Turtle" which the following year launched the first submarine attack in history, unsuccessfully against a British warship at anchor in New York Harbor.
In 1777, the British got word of Continental Army supplies in Danbury, and they landed an expeditionary force of some 2,000 troops in Westport. This force then marched to Danbury and destroyed homes and much of the depot. Continental Army troops and militia led by General David Wooster and General Benedict Arnold engaged them on their return march at Ridgefield in 1777. For the winter of 1778–79, General George Washington decided to split the Continental Army into three divisions encircling New York City, where British General Sir Henry Clinton had taken up winter quarters. Major General Israel Putnam chose Redding as the winter encampment quarters for some 3,000 regulars and militia under his command. The Redding encampment allowed Putnam's soldiers to guard the replenished supply depot in Danbury and to support any operations along Long Island Sound and the Hudson River Valley. Some of the men were veterans of the winter encampment at Valley Forge, Pennsylvania, the previous winter. Soldiers at the Redding camp endured supply shortages, cold temperatures, and significant snow, with some historians dubbing the encampment "Connecticut's Valley Forge".
The state was also the launching site for a number of raids against Long Island orchestrated by Samuel Holden Parsons and Benjamin Tallmadge, and provided soldiers and material for the war effort, especially to Washington's army outside New York City. General William Tryon raided the Connecticut coast in July 1779, focusing on New Haven, Norwalk, and Fairfield. New London and Groton Heights were raided in September 1781 by Benedict Arnold, who had turned traitor to the British.
At the outset of the American Revolution, the Continental Congress assigned Nathaniel Shaw Jr. of New London as its naval agent in charge of recruiting privateers to seize British vessels as opportunities presented, with nearly 50 operating out of the Thames River which eventually drew the reprisal from the British force led by Arnold.
Early statehood.
Early national period and industrial revolution.
Connecticut ratified the U.S. Constitution on January 9, 1788, becoming the fifth state.
The state prospered during the era following the American Revolution, as mills and textile factories were built and seaports flourished from trade and fisheries. After Congress established in 1790 the predecessor to the U.S. Revenue Cutter Service that would evolve into the U.S. Coast Guard, President Washington assigned Jonathan Maltbie as one of seven masters to enforce customs regulations, with Maltbie monitoring the southern New England coast with a 48-foot cutter sloop named "Argus".
In 1786, Connecticut ceded territory to the U.S. government that became part of the Northwest Territory. The state retained land extending across the northern part of present-day Ohio called the Connecticut Western Reserve. The Western Reserve section was settled largely by people from Connecticut, and they brought Connecticut place names to Ohio.
Connecticut made agreements with Pennsylvania and New York which extinguished the land claims within those states' boundaries and created the Connecticut Panhandle. The state then ceded the Western Reserve in 1800 to the federal government, which brought it to its present boundaries (other than minor adjustments with Massachusetts).
19th century.
For the first time in 1800, Connecticut shipwrights launched more than 100 vessels in a single year. Over the following decade, up to the renewed hostilities with Britain that sparked the War of 1812, Connecticut boatyards constructed close to 1,000 vessels, the most productive stretch of any decade in the 19th century.
During the war, the British launched raids in Stonington and Essex and blockaded vessels in the Thames River. Derby native Isaac Hull became Connecticut's best-known naval figure to win renown during the conflict, as captain of the USS "Constitution".
The British blockade during the War of 1812 hurt exports and bolstered the influence of Federalists who opposed the war. The cessation of imports from Britain stimulated the construction of factories to manufacture textiles and machinery. Connecticut came to be recognized as a major center for manufacturing, due in part to the inventions of Eli Whitney and other early innovators of the Industrial Revolution.
The war led to the development of fast clippers that helped extend the reach of New England merchants to the Pacific and Indian oceans. The first half of the 19th century saw as well a rapid rise in whaling, with New London emerging as one of the New England industry's three biggest home ports after Nantucket and New Bedford.
The state was known for its political conservatism, typified by its Federalist party and the Yale College of Timothy Dwight. The foremost intellectuals were Dwight and Noah Webster, who compiled his great dictionary in New Haven. Religious tensions polarized the state, as the Congregational Church struggled to maintain traditional viewpoints, in alliance with the Federalists. The failure of the Hartford Convention in 1814 hurt the Federalist cause, with the Democratic-Republican Party gaining control in 1817.
Connecticut had been governed under the "Fundamental Orders" since 1639, but the state adopted a new constitution in 1818.
Civil War era.
Connecticut manufacturers played a major role in supplying the Union forces with weapons and supplies during the Civil War. The state furnished 55,000 men, formed into thirty full regiments of infantry, including two in the U.S. Colored Troops, with several Connecticut men becoming generals. The Navy attracted 250 officers and 2,100 men, and Glastonbury native Gideon Welles was Secretary of the Navy. James H. Ward of Hartford was the first U.S. Naval Officer killed in the Civil War. Connecticut casualties included 2,088 killed in combat, 2,801 dying from disease, and 689 dying in Confederate prison camps.
A surge of national unity in 1861 brought thousands flocking to the colors from every town and city. However, as the war became a crusade to end slavery, many Democrats (especially Irish Catholics) pulled back. The Democrats took a pro-slavery position and included many Copperheads willing to let the South secede. The intensely fought 1863 election for governor was narrowly won by the Republicans.
Second industrial revolution.
Connecticut's extensive industry, dense population, flat terrain, and wealth encouraged the construction of railroads starting in 1839, and the mileage of line in operation grew steadily from 1840 through 1860.
The New York, New Haven and Hartford Railroad, called the "New Haven" or "The Consolidated", became the dominant Connecticut railroad company after 1872. J. P. Morgan began financing the major New England railroads in the 1890s, dividing territory so that they would not compete. The New Haven purchased 50 smaller companies, including steamship lines, and built a network of light rail (electrified trolley) lines that provided inter-urban transportation for all of southern New England. By 1912, the New Haven operated an extensive network of track and employed 120,000 people.
As steam-powered passenger ships proliferated after the Civil War, Noank would produce the two largest built in Connecticut during the 19th century, with the 332-foot wooden steam paddle wheeler "Rhode Island" launched in 1882, and the 345-foot paddle wheeler "Connecticut" seven years later. Connecticut shipyards would launch more than 165 steam-powered vessels in the 19th century.
In 1875, the first telephone exchange in the world was established in New Haven.
20th century.
World War I.
When World War I broke out in 1914, Connecticut became a major supplier of weaponry to the U.S. military; by 1918, 80% of the state's industries were producing goods for the war effort. Remington Arms in Bridgeport produced half the small-arms cartridges used by the U.S. Army, with other major suppliers including Winchester in New Haven and Colt in Hartford.
Connecticut was also an important U.S. Navy supplier, with Electric Boat receiving orders for 85 submarines, Lake Torpedo Boat building more than 20 subs, and the Groton Iron Works building freighters. On June 21, 1916, the Navy made Groton the site for its East Coast submarine base and school.
The state enthusiastically supported the American war effort in 1917 and 1918 with large purchases of war bonds, a further expansion of industry, and an emphasis on increasing food production on the farms. Thousands of state, local, and volunteer groups mobilized for the war effort and were coordinated by the Connecticut State Council of Defense. Manufacturers wrestled with manpower shortages; Waterbury's American Brass and Manufacturing Company was running at half capacity, so the federal government agreed to furlough soldiers to work there.
Interwar period.
In 1919, J. Henry Roraback started the Connecticut Light & Power Co. which became the state's dominant electric utility. In 1925, Frederick Rentschler spurred the creation of Pratt & Whitney in Hartford to develop engines for aircraft; the company became an important military supplier in World WarII and one of the three major manufacturers of jet engines in the world.
On September 21, 1938, the most destructive storm in New England history struck eastern Connecticut, killing hundreds of people. The eye of the "Long Island Express" passed just west of New Haven and devastated the Connecticut shoreline between Old Saybrook and Stonington from the full force of wind and waves, even though they had partial protection by Long Island. The hurricane caused extensive damage to infrastructure, homes, and businesses. In New London, a sailing ship was driven into a warehouse complex, causing a major fire. Heavy rainfall caused the Connecticut River to flood downtown Hartford and East Hartford. An estimated 50,000 trees fell onto roadways.
World War II.
The advent of lend-lease in support of Britain helped lift Connecticut from the Great Depression, with the state a major production center for weaponry and supplies used in World WarII. Connecticut manufactured 4.1% of total U.S. military armaments produced during the war, ranking ninth among the 48 states, with major factories including Colt for firearms, Pratt & Whitney for aircraft engines, Chance Vought for fighter planes, Hamilton Standard for propellers, and Electric Boat for submarines and PT boats.
On May 13, 1940, Igor Sikorsky made an untethered flight of the first practical helicopter. The helicopter saw limited use in World War II, but future military production made Sikorsky Aircraft's Stratford plant Connecticut's largest single manufacturing site by the start of the 21st century.
Post-World War II economic expansion.
Connecticut lost some wartime factories following the end of hostilities, but the state shared in a general post-war expansion that included the construction of highways and resulting middle-class growth in suburban areas.
Prescott Bush represented Connecticut in the U.S. Senate from 1952 to 1963; his son George H. W. Bush and grandson George W. Bush both became presidents of the United States. In 1965, Connecticut ratified its current constitution, replacing the document that had served since 1818.
In 1968, commercial operation began for the Connecticut Yankee Nuclear Power Plant in Haddam; in 1970, the Millstone Nuclear Power Station began operations in Waterford. In 1974, Connecticut elected Democratic Governor Ella T. Grasso, who became the first woman in any state to be elected governor without being the wife or widow of a previous governor.
Late 20th century.
Connecticut's dependence on the defense industry posed an economic challenge at the end of the Cold War. The resulting budget crisis helped elect Lowell Weicker as governor on a third-party ticket in 1990. Weicker's remedy was a state income tax which proved effective in balancing the budget, but only for the short-term. He did not run for a second term, in part because of this politically unpopular move.
In 1992, initial construction was completed on Foxwoods Casino at the Mashantucket Pequots reservation in eastern Connecticut, which became the largest casino in the Western Hemisphere. Mohegan Sun followed four years later.
Early 21st century.
In 2000, presidential candidate Al Gore chose Senator Joe Lieberman as his running mate, marking the first time that a major party presidential ticket included someone of the Jewish faith. Gore and Lieberman fell five votes short of George W. Bush and Dick Cheney in the Electoral College. In the terrorist attacks of September 11, 2001, 65 state residents were killed, mostly Fairfield County residents who were working in the World Trade Center. In 2004, Republican Governor John G. Rowland resigned during a corruption investigation, later pleading guilty to federal charges.
Connecticut was hit by three major storms in just over 14 months in 2011 and 2012, with all three causing extensive property damage and electric outages. Hurricane Irene struck Connecticut August 28, and damage totaled $235 million. Two months later, the "Halloween nor'easter" dropped extensive snow onto trees, resulting in snapped branches and trunks that damaged power lines; some areas were without electricity for 11 days. Hurricane Sandy hit New Jersey and passed over Connecticut with hurricane-force winds and tides up to 12 feet above normal. Many coastal buildings were damaged or destroyed. Sandy's winds drove storm surges into streets and cut power to 98% of homes and businesses, with more than $360 million in damage.
On December 14, 2012, Adam Lanza shot and killed 26 people at Sandy Hook Elementary School in Newtown, and then killed himself. The massacre spurred renewed efforts by activists for tighter laws on gun ownership nationally.
In the summer and fall of 2016, Connecticut experienced a drought in many parts of the state, causing some water-use bans. At the drought's height, 45% of the state was listed in Severe Drought by the U.S. Drought Monitor, including almost all of Hartford and Litchfield counties, with the remainder of the state in Moderate Drought, including Middlesex, Fairfield, New London, New Haven, Windham, and Tolland counties. This affected the agricultural economy in the state.
Geography.
Connecticut is bordered on the south by Long Island Sound, on the west by New York, on the north by Massachusetts, and on the east by Rhode Island. The state capital and fourth largest city is Hartford, and other major cities and towns (by population) include Bridgeport, New Haven, Stamford, Waterbury, Norwalk, Danbury, New Britain, Greenwich, and Bristol. There are 169 incorporated towns in Connecticut, with cities and villages included within some towns.
The highest peak in Connecticut is Bear Mountain in Salisbury in the northwest corner of the state. The highest point, however, is just east of where Connecticut, Massachusetts, and New York meet (42°3′ N, 73°29′ W), on the southern slope of Mount Frissell, whose peak lies nearby in Massachusetts. At the opposite extreme, many of the coastal towns have areas that lie only slightly above sea level.
Connecticut has a long maritime history and a reputation based on that history—yet the state has no direct oceanfront (technically speaking). The coast of Connecticut sits on Long Island Sound, which is an estuary. The state's access to the open Atlantic Ocean is both to the west (toward New York City) and to the east (toward the "race" near Rhode Island). Due to this unique geography, Long Island Sound and the Connecticut shoreline are relatively protected from high waves from storms.
The Connecticut River cuts through the center of the state, flowing into Long Island Sound. The most populous metropolitan region centered within the state lies in the Connecticut River Valley. Despite Connecticut's relatively small size, it features wide regional variations in its landscape; for example, in the northwestern Litchfield Hills, it features rolling mountains and horse farms, whereas in areas to the east of New Haven along the coast, the landscape features coastal marshes, beaches, and large scale maritime activities.
Connecticut's rural areas and small towns in the northeast and northwest corners of the state contrast sharply with its industrial cities such as Stamford, Bridgeport, and New Haven, located along the coastal highways from the New York border to New London, then northward up the Connecticut River to Hartford. Many towns in northeastern and northwestern Connecticut center around a green. Near the green typically stand historical visual symbols of New England towns, such as a white church, a colonial meeting house, a colonial tavern or inn, several colonial houses, and so on, establishing a scenic historical appearance maintained for both historic preservation and tourism. Many of the areas in southern and coastal Connecticut have been built up and rebuilt over the years, and look less visually like traditional New England.
The northern boundary of the state with Massachusetts is marked by the Southwick Jog or Granby Notch, a small, roughly square detour into Connecticut. The origin of this anomaly lies in a long line of disputes and temporary agreements that finally concluded in 1804, when southern Southwick's residents sought to leave Massachusetts and the town was split in half.
The southwestern border of Connecticut where it abuts New York State is marked by a panhandle in Fairfield County and the Western Connecticut Planning Region, containing the towns of Greenwich, Stamford, New Canaan, Darien, and parts of Norwalk and Wilton. This irregularity in the boundary is the result of territorial disputes in the late 17th century, culminating with New York giving up its claim to the area, whose residents considered themselves part of Connecticut, in exchange for an equivalent area extending northwards from Ridgefield to the Massachusetts border, as well as undisputed claim to Rye, New York.
Areas maintained by the National Park Service include Appalachian National Scenic Trail, Quinebaug and Shetucket Rivers Valley National Heritage Corridor, and Weir Farm National Historic Site.
Climate.
Connecticut lies at the rough transition zone between the southern end of the humid continental climate and the northern portion of the humid subtropical climate. Northern Connecticut generally experiences hot, humid summers and moderately cold winters with periodic snowfall. Far southern and coastal Connecticut has cool winters with a mix of rain and infrequent snow, and the long, hot, humid summers typical of the middle and lower East Coast. Coastal Connecticut is the very broad transition zone between the humid continental and humid subtropical climates.
Precipitation.
Connecticut sees a fairly even precipitation pattern spread throughout the 12 months. Connecticut averages 56% of possible sunshine (higher than the U.S. national average), averaging 2,400 hours of sunshine annually. Occasionally, some months may see extremes in precipitation, either much higher or lower than normal, though long term droughts and floods are rare.
Early spring can range from slightly cool (40s to low 50s F) to warm (65 to 70 F), while mid and late spring (late April/May) is warm. By late May, the building Bermuda High creates a southerly flow of warm and humid tropical air, bringing hot weather conditions throughout the state. Average high temperatures peak in late July, with coastal New London somewhat cooler than inland Windsor Locks. On occasion, heat waves with highs from 90 to 100 F occur across Connecticut. Connecticut's record high temperature is 106 F, which occurred in Danbury on July 15, 1995. Although summers are sunny in Connecticut, quick-moving summer thunderstorms can bring brief downpours with thunder and lightning. Occasionally these thunderstorms can be severe, and the state usually averages one tornado per year. During hurricane season, the remains of tropical cyclones occasionally affect the region, though a direct hit is rare. Some notable hurricanes to impact the state include the 1938 New England hurricane, Hurricane Carol in 1954, Hurricane Sandy in 2012, and Hurricane Isaias in 2020.
Weather commonly associated with the fall season typically begins in October and lasts to the first days of December. Daily high temperatures in October and November range from the 50s to 60s F. Winters (December through mid-March) are generally moderately cold, growing colder from south to north in Connecticut. The coldest month, January, has average high temperatures that are mildest in the coastal lowlands and coldest in the inland and northern portions of the state.
The lowest temperature recorded in Connecticut is -32 F, which has been observed twice: in Falls Village on February 16, 1943, and in Coventry on January 22, 1961. The average yearly snowfall ranges from heavy totals in the higher elevations of the northern portion of the state to much lighter totals along the southeast coast of Connecticut (Branford to Groton). Most of Connecticut has less than 60 days of snow cover, while coastal areas often see only 30 days or so. Annually, 95% of seasonal snowfall in Connecticut falls from early December to late March. In winter, Connecticut can occasionally get heavy snowstorms, called nor'easters, which on rare occasions may produce as much as two feet of snow. Ice storms, though rare, also occur, such as the Southern New England ice storm of 1973.
Flora.
Forests consist of a mix of Northeastern coastal forests of oak in southern areas of the state, to the upland New England-Acadian forests in the northwestern parts of the state. Mountain Laurel ("Kalmia latifolia") is the state flower and is native to low ridges in several parts of Connecticut. Rosebay rhododendron ("Rhododendron maximum") is also native to eastern uplands of Connecticut and Pachaug State Forest is home to the Rhododendron Sanctuary Trail. Atlantic white cedar ("Chamaecyparis thyoides"), is found in wetlands in the southern parts of the state. Connecticut has one native cactus ("Opuntia humifusa"), found in sandy coastal areas and low hillsides. Several types of beach grasses and wildflowers are also native to Connecticut. Connecticut spans USDA Plant Hardiness Zones 5b to 7a. Coastal Connecticut is the broad transition zone where more southern and subtropical plants are cultivated.
Demographics.
As of the 2020 United States census, Connecticut has a population of 3,605,944, an increase of 31,847 people (0.9%) from the 2010 United States census. Among the census records, 20.4% of the population was under 18.
In 1790, 97% of the population in Connecticut was classified as "rural". The first census in which less than half the population was classified as rural was 1890. In the 2000 census, only 12.3% was considered rural. Most of western and southern Connecticut (particularly the Gold Coast) is strongly associated with New York City; this area is the most affluent and populous region of the state and has high property costs and high incomes. The center of population of Connecticut is located in the town of Cheshire.
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 2,930 homeless people in Connecticut.
In common with the majority of the United States, non-Hispanic whites have remained the dominant racial and ethnic group in Connecticut, but they have declined from 98% of the population in 1940 to 63% as of the 2020 census, as the Hispanic and Latino American and Asian American populations have grown. Among Connecticut residents younger than age 1, 46.1% were minorities. As of 2004, 11.4% of the population (400,000) was foreign-born. In 1870, native-born Americans had accounted for 75% of the state's population, but that had dropped to 35% by 1918. Also as of 2000, 81.69% of Connecticut residents age 5 and older spoke English at home and 8.42% spoke Spanish, followed by Italian at 1.59%, French at 1.31%, and Polish at 1.20%.
The largest ancestry groups since 2010 were: 19.3% Italian, 17.9% Irish, 10.7% English, 10.4% German, 8.6% Polish, 6.6% French, 3.0% French Canadian, 2.7% American, 2.0% Scottish, and 1.4% Scotch Irish.
Connecticut is one of the most racially segregated states in the nation, with the nonwhite population largely concentrated in major urban areas such as Bridgeport, Hartford, New Haven and Waterbury. In many cases, towns neighboring urban areas are sharply segregated from them.
The top countries of origin for Connecticut's immigrants in 2018 were India, Jamaica, the Dominican Republic, Poland and Ecuador.
Birth data.
"Note: Births in table do not add up because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number."
Religion.
A 2014 Pew survey of Connecticut residents' religious self-identification showed the following distribution of affiliations: Protestant 35%, Roman Catholic 33%, non-religious 28%, Jewish 3%, Mormon 1%, Orthodox 1%, Jehovah's Witness 1%, Hindu 1%, Buddhist 1%, and Muslim 1%. Jewish congregations had 108,280 (3.2%) members in 2000.
The Jewish population is concentrated in the towns near Long Island Sound between Greenwich and New Haven, in Greater New Haven and in Greater Hartford, especially the suburb of West Hartford. According to the Association of Religion Data Archives, the largest Christian denominations, by number of adherents, in 2010 were: the Catholic Church, with 1,252,936; the United Church of Christ, with 96,506; and non-denominational Evangelical Protestants, with 72,863.
Recent immigration has brought other non-Christian religions to the state, but the numbers of adherents of other religions are still low. Connecticut is also home to New England's largest Protestant church: The First Cathedral in Bloomfield, Connecticut. Hartford is seat to the Roman Catholic Archdiocese of Hartford, which is sovereign over the Diocese of Bridgeport and the Diocese of Norwich.
According to a 2020 Public Religion Research Institute survey, 71% of the population identified as some form of Christian. It found the state to be 21% non-religious, and specifically 19% white mainline Protestant, 19% white Catholic, 9% white evangelical Protestant, 7% black Protestant, and 7% Hispanic Catholic. In contrast to the 2014 Pew survey, the 2020 PRRI survey found Connecticut to be 40% Protestant and 28% Catholic (with the remainder of Christians being Mormon at 2% and Orthodox at 1%). The PRRI survey found Jewish citizens to be 2% of the population and, like the Pew survey, Hindus, Buddhists, and Muslims to be 1% each.
Economy.
The total 2023 gross state product for Connecticut was $345.9 billion, up from $321.7 billion in 2022.
Connecticut's adjusted per capita personal income in 2022 was estimated at $77,940, third-highest among states. There is a large disparity in incomes throughout the state; Connecticut was tied with California and Massachusetts for the second highest (after New York's 0.52) Gini coefficient, at 0.50, as of 2020. As of 2025, it remained tied for second with Louisiana, with only New York having higher levels of inequality. Despite its overall inequality, Connecticut has a relatively low poverty rate. According to a 2018 study by Phoenix Marketing International, Connecticut had the third-largest number of millionaires per capita in the United States, with a ratio of 7.75%. New Canaan is the wealthiest town in Connecticut, with a per capita income of $105,846. Hartford is the poorest municipality in Connecticut, with a per capita income of $16,798 in 2020. At the county level, per capita income ranged from $48,295 in Fairfield County to $26,585 in Windham County, which is close to the United States average.
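For context on the Gini figures above: the coefficient is a standard inequality measure, and in one common formulation it is half the mean absolute difference between all pairs of incomes, normalized by the mean income:

G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^2 \bar{x}}

Here x_1, \dots, x_n are individual incomes and \bar{x} is their mean; G = 0 indicates perfect equality and values approaching 1 indicate extreme concentration, which is why Connecticut's 0.50 places it among the most unequal states.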
As of December 2019, Connecticut's seasonally adjusted unemployment rate was 3.8%, with U.S. unemployment at 3.5% that month. Dating back to 1982, Connecticut recorded its lowest unemployment in 2000 between August and October, at 2.2%. The highest unemployment rate during that period occurred in November and December 2010 at 9.3%, but economists expected record new levels of layoffs as a result of business closures in the spring of 2020 due to the coronavirus pandemic.
Taxation.
Tax is collected by the Connecticut Department of Revenue Services and by local municipalities.
As of 2012, Connecticut residents had the second highest rate in the nation of combined state and local taxes after New York, at 12.6% of income compared to the national average of 9.9% as reported by the Tax Foundation.
Before 1991, Connecticut had an investment-only income tax system. Income from employment was untaxed, but income from investments was taxed at 13%, the highest rate in the U.S., with no deductions allowed for costs of producing the investment income, such as interest on borrowing.
In 1991, under Governor Lowell P. Weicker Jr., an independent, the system was changed to one in which the taxes on employment income and investment income were equalized at a maximum rate of 4%. The new tax policy drew investment firms to Connecticut; Fairfield County became home to the headquarters of 16 of the 200 largest hedge funds in the world.
The income tax rates on Connecticut individuals are divided into seven tax brackets of 3% (on income up to $10,000); 5% ($10,000–$50,000); 5.5% ($50,000–$100,000); 6% ($100,000–$200,000); 6.5% ($200,000–$250,000); 6.9% ($250,000–$500,000); and 6.99% above $500,000, with additional amounts owed depending on the bracket.
All wages of Connecticut residents are subject to the state's income tax, even if earned outside the state. However, in those cases, Connecticut income tax must be withheld only to the extent the Connecticut tax exceeds the amount withheld by the other jurisdiction. Since New York has higher income tax rates than Connecticut, this effectively means that Connecticut residents who work in New York have no Connecticut income tax withheld. Connecticut permits a credit for taxes paid to other jurisdictions, but since residents who work in other states are still subject to Connecticut income taxation, they may owe taxes if the jurisdictional credit does not fully offset the Connecticut tax amount.
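The sketch below illustrates the bracket and credit arithmetic described in the two preceding paragraphs, assuming standard marginal brackets; it deliberately ignores the "additional amounts" (recapture), exemptions, and other credits the statute provides, and the $9,000 out-of-state withholding figure is purely hypothetical.

# A minimal sketch, not official tax guidance: marginal-bracket math
# using the seven rates listed above.
BRACKETS = [  # (upper bound of bracket in dollars, marginal rate)
    (10_000, 0.03),
    (50_000, 0.05),
    (100_000, 0.055),
    (200_000, 0.06),
    (250_000, 0.065),
    (500_000, 0.069),
    (float("inf"), 0.0699),
]

def ct_income_tax(taxable_income: float) -> float:
    """Apply each marginal rate to the slice of income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

def ct_tax_after_credit(taxable_income: float, tax_paid_elsewhere: float) -> float:
    """Credit for tax paid to another jurisdiction, capped at the CT amount."""
    ct_tax = ct_income_tax(taxable_income)
    return ct_tax - min(tax_paid_elsewhere, ct_tax)

print(ct_income_tax(120_000))               # 6250.0
print(ct_tax_after_credit(120_000, 9_000))  # 0.0

Because the credit is capped at the Connecticut tax, a resident in this hypothetical example owes nothing further but receives no refund of the excess withheld elsewhere, matching the rule in the paragraph above.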
Connecticut levies a 6.35% state sales tax on the retail sale, lease, or rental of most goods. Some items and services are not subject to sales and use taxes unless specifically enumerated as taxable by statute. A provision excluding clothing under $50 from sales tax was later repealed. There are no additional sales taxes imposed by local jurisdictions. In 2001, Connecticut instituted what became an annual sales tax "holiday" each August lasting one week, during which retailers do not have to remit sales tax on certain items and quantities of clothing, the specifics of which have varied from year to year.
State law authorizes municipalities to tax property, including real estate, vehicles and other personal property, with state statute providing varying exemptions, credits and abatements. All assessments are at 70% of fair market value. The maximum property tax credit is $200 per return and any excess may not be refunded or carried forward. According to the Tax Foundation, on a per capita basis in the 2017 fiscal year Connecticut residents paid the 3rd highest average property taxes in the nation after New Hampshire and New Jersey.
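As a brief worked illustration of the 70% assessment rule above (the mill rate is hypothetical, since actual rates vary by municipality), Connecticut property tax bills are commonly computed from a mill rate, that is, dollars of tax per $1,000 of assessed value:

\text{tax} = \underbrace{(0.70 \times \text{market value})}_{\text{assessed value}} \times \frac{\text{mill rate}}{1000}

For example, a home with a $300,000 fair market value is assessed at $210,000; at an assumed rate of 30 mills, the annual tax would be 210{,}000 \times 0.030 = \$6{,}300.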
As of January 2020, gasoline taxes and fees in Connecticut were 40.13 cents per gallon, 11th highest in the United States, which had a nationwide average of 36.13 cents a gallon excluding federal taxes. Diesel taxes and fees as of January 2020 in Connecticut were 46.50 cents per gallon, ninth highest nationally, with the U.S. average at 37.91 cents.
Real estate.
In 2019, sales of single-family homes in Connecticut totaled 33,146 units, a 2.1 percent decline from the 2018 transaction total. The median home sold in 2019 recorded a transaction amount of $260,000, up 0.4 percent from 2018.
Connecticut had the seventh highest rate of home foreclosure activity in the country in 2019 at 0.53 percent of the total housing stock.
Industries.
Finance, insurance and real estate was Connecticut's largest industry in 2018 as ranked by gross domestic product, generating $75.7 billion in GDP that year. Major employers include The Hartford, Travelers, Harman International, Cigna, the Aetna subsidiary of CVS Health, Mass Mutual, People's United Financial, Bank of America, Realogy, Bridgewater Associates, GE Capital, William Raveis Real Estate, and Berkshire Hathaway through reinsurance and residential real estate subsidiaries.
The combined educational, health and social services sector was the largest single industry as ranked by employment, with a combined workforce of 342,600 people at the end of 2019, and ranked fourth in GDP the year before at $28.3 billion.
The broad business and professional services sector had the second highest GDP total in Connecticut in 2018 at an estimated $33.7 billion.
Manufacturing was the third biggest industry in 2018 with GDP of $30.8 billion, dominated by Raytheon Technologies formed in the March 2020 merger of Hartford-based United Technologies and Waltham, Mass.-based Raytheon Co. As of the merger, Raytheon Technologies employed about 19,000 people in Connecticut through subsidiaries Pratt & Whitney and Collins Aerospace. Lockheed Martin subsidiary Sikorsky Aircraft operates Connecticut's single largest manufacturing plant in Stratford, where it makes helicopters.
Major audio equipment manufacturing company Harman International is headquartered in Stamford, Connecticut. It owns many audio brands, including JBL, AKG, and Harman Kardon.
Other major manufacturers include the Electric Boat division of General Dynamics, which makes submarines in Groton.
Connecticut historically was a center of gun manufacturing, and four gun-manufacturing firms continued to operate in the state, employing 2,000 people: Colt, Stag, Ruger, and Mossberg. Marlin, owned by Remington, closed in April 2011.
Other large components of the Connecticut economy in 2018 included wholesale trade ($18.1 billion in GDP); information services ($13.8 billion); retail ($13.7 billion); arts, entertainment and food services ($9.1 billion); and construction ($8.3 billion).
Tourists spent $9.3 billion in Connecticut in 2017 according to estimates as part of a series of studies commissioned by the state of Connecticut. Foxwoods Resort Casino and Mohegan Sun are the two biggest tourist draws and number among the state's largest employers; both are located on Native American reservations in southeastern Connecticut.
Connecticut's agricultural production totaled $580 million in 2017, with just over half of that revenue the result of nursery stock production. Milk production totaled $81 million that year, with other major product categories including eggs, vegetables and fruit, tobacco and shellfish.
Energy.
Connecticut's economy uses less energy to produce each dollar of GDP than all other states except California, Massachusetts, and New York. It uses less energy on a per-capita basis than all but six other states. It has no fossil-fuel resources but does have renewable resources. Average retail electricity prices are the highest among the 48 contiguous states. While most of the state's energy consumption is generated using fossil fuels, nuclear power delivered over 40% of the state's electricity generation in 2019. Refuse-derived fuels and other biomass provided the largest share of renewable electricity, at about a 3% share. Solar and wind generation have grown in recent years. More than three-quarters of solar generation came from distributed small-scale installations such as rooftop solar in 2019, and planning is underway to significantly expand renewable generation using the state's offshore wind resource.
Transport.
Roads.
The Interstate highways in the state are Interstate 95 (I-95) traveling southwest to northeast along the coast, I-84 traveling southwest to northeast in the center of the state, I-91 traveling north to south in the center of the state, and I-395 traveling north to south near the eastern border of the state. The other major highways in Connecticut are the Merritt Parkway and Wilbur Cross Parkway, which together form Connecticut Route 15 (Route 15), traveling from the Hutchinson River Parkway in New York parallel to I-95 before turning north of New Haven and traveling parallel to I-91, finally becoming a surface road in Berlin. I-95 and Route 15 were originally toll roads; they relied on a system of toll plazas at which all traffic stopped and paid fixed tolls. A series of major crashes at these plazas eventually contributed to the decision to remove the tolls in 1988. Other major arteries in the state include U.S. Route 7 (US 7) in the west traveling parallel to the New York state line, Route 8 farther east near the industrial city of Waterbury and traveling north–south along the Naugatuck River Valley nearly parallel with US 7, and Route 9 in the east.
Between New Haven and New York City, I-95 is one of the most congested highways in the United States. Although I-95 has been widened in several spots, some areas are only three lanes, which strains traffic capacity and results in frequent and lengthy rush hour delays. Frequently, the congestion spills over to clog the parallel Merritt Parkway and even US 1. The state has encouraged traffic reduction schemes, including rail use and ride-sharing.
Connecticut also has a very active bicycling community, with one of the highest rates of bicycle ownership and use in the United States, particularly in New Haven. According to the U.S. Census 2006 American Community Survey, New Haven has the highest percentage of commuters who bicycle to work of any major metropolitan center on the East Coast.
Rail.
Rail is a popular travel mode between New Haven and New York City's Grand Central Terminal. Southwestern Connecticut is served by the Metro-North Railroad's New Haven Line, operated by the Metropolitan Transportation Authority. Metro-North provides commuter service between New York City and New Haven, with branches to New Canaan, Danbury, and Waterbury. Connecticut lies along Amtrak's Northeast Corridor, which features frequent Northeast Regional and Acela Express service from New Haven south to New York City, Philadelphia, Baltimore, Washington, D.C., and Norfolk, VA, as well as north to New London, Providence and Boston. Since 1990, coastal cities and towns between New Haven and New London are also served by the Shore Line East commuter line.
In June 2018, a commuter rail service called the Hartford Line began operating between New Haven and Springfield on Amtrak's New Haven–Springfield Line. Hartford Line service is provided by both Amtrak and the Connecticut Department of Transportation's CT Rail, and in addition to its termini it serves New Haven State Street, Wallingford, Meriden, Berlin, Hartford, Windsor, and Windsor Locks. As of 2021, several infill stations were planned. Amtrak's Vermonter runs from Washington, D.C., to St. Albans, Vermont, via the same line. In July 2019, Amtrak launched the "Valley Flyer", which runs between New Haven and Greenfield, Massachusetts.
A proposed commuter rail service, the Central Corridor Rail Line, would connect New London with Norwich, Willimantic, Storrs (via the main campus of the University of Connecticut), and Stafford Springs, with service continuing into Massachusetts and on to Brattleboro, Vermont. The proposal would also add stops serving the popular tourist destinations Foxwoods Resort Casino and Mohegan Sun.
Bus.
Statewide bus service is supplied by Connecticut Transit, owned by the Connecticut Department of Transportation, with smaller municipal authorities providing local service. Bus networks are an important part of the transportation system in Connecticut, especially in urban areas like Hartford, Stamford, Norwalk, Bridgeport and New Haven. Connecticut Transit also operates CTfastrak, a bus rapid transit service between New Britain and Hartford, which opened to the public on March 28, 2015.
Air.
Connecticut's largest airport is Bradley International Airport in Windsor Locks, north of Hartford. Many residents of central and southern Connecticut also make heavy use of New York City's John F. Kennedy International Airport and Newark Liberty International Airport, especially for international travel. Smaller regional air service is provided at Tweed New Haven Regional Airport. Larger civil airports include Danbury Municipal Airport and Waterbury-Oxford Airport in western Connecticut, Hartford–Brainard Airport in central Connecticut, and Groton-New London Airport in eastern Connecticut. Sikorsky Memorial Airport is located in Stratford and mostly serves cargo, helicopter and private aviation.
Ferry.
Several ferry services cross Long Island Sound and connect the state to Long Island. The Bridgeport & Port Jefferson Ferry travels between Bridgeport, Connecticut, and Port Jefferson, New York. Ferry service also operates out of New London to Orient, New York; Fishers Island, New York; and Block Island, Rhode Island, all popular tourist destinations. Two ferries cross the Connecticut River: the Rocky Hill–Glastonbury ferry and the Chester–Hadlyme ferry; the former has run since 1655 and is the oldest continuously operating ferry in the United States.
Law and government.
Hartford has been the sole capital of Connecticut since 1875. Before then, New Haven and Hartford alternated as dual capitals.
Constitutional history.
Connecticut is known as the "Constitution State". The origin of this nickname is uncertain, but it likely comes from Connecticut's pivotal role in the federal constitutional convention of 1787, during which Roger Sherman and Oliver Ellsworth helped to orchestrate what became known as the Connecticut Compromise, or the Great Compromise. This plan combined the Virginia Plan and the New Jersey Plan to form a bicameral legislature, a form copied by almost every state constitution since the adoption of the federal constitution. Virginia and New Jersey had each proposed their own legislative designs, but Connecticut's compromise was the one adopted, and it remained in effect until the early 20th century, when senators ceased to be selected by their state legislatures and were instead directly elected. Otherwise, it is still the design of Congress.
The nickname also might refer to the Fundamental Orders of 1638–39. These Fundamental Orders represent the framework for the first formal Connecticut state government written by a representative body in Connecticut. The State of Connecticut government has operated under the direction of four separate documents in the course of the state's constitutional history. After the Fundamental Orders, Connecticut was granted governmental authority by King Charles II of England through the Connecticut Charter of 1662.
Separate branches of government did not exist during this period, and the General Assembly acted as the supreme authority. A constitution similar to the modern U.S. Constitution was not adopted in Connecticut until 1818. Finally, the current state constitution was implemented in 1965. The 1965 constitution absorbed a majority of its 1818 predecessor, but incorporated a handful of important modifications.
Executive.
The governor heads the executive branch. Ned Lamont is the current governor and Susan Bysiewicz the lieutenant governor; both are Democrats. From 1639 until the adoption of the 1818 constitution, the governor presided over the General Assembly. In 1974, Ella Grasso was elected governor of Connecticut, becoming the first woman in United States history to be elected a state's governor without her husband having held the office first.
There are several executive departments: Administrative Services, Agriculture, Banking, Children and Families, Consumer Protection, Correction, Economic and Community Development, Developmental Services, Construction Services, Education, Emergency Management and Public Protection, Energy & Environmental Protection, Higher Education, Insurance, Labor, Mental Health and Addiction Services, Military, Motor Vehicles, Public Health, Public Utility Regulatory Authority, Public Works, Revenue Services, Social Services, Transportation, and Veterans Affairs. In addition to these departments, there are other independent bureaus, offices and commissions.
In addition to the governor and lieutenant governor, there are four other executive officers named in the state constitution that are elected directly by voters: secretary of the state, treasurer, comptroller, and attorney general. All executive officers are elected to four-year terms.
Legislative.
Connecticut's legislative branch is known as the General Assembly. It is a bicameral legislature consisting of an upper body, the State Senate (36 senators), and a lower body, the House of Representatives (151 representatives). Bills must pass each house in order to become law. The governor can veto bills, but this veto can be overridden by a two-thirds majority in both houses. Per Article XV of the state constitution, senators and representatives must be at least 18 years of age and are elected to two-year terms in November of even-numbered years. The constitution also requires that there be between 30 and 50 senators and between 125 and 225 representatives. The lieutenant governor presides over the Senate, except when absent from the chamber, in which case the president pro tempore presides. The speaker of the House presides over the House; Matthew Ritter is the current speaker.
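For concreteness, that override threshold works out to 24 of the 36 senators and 101 of the 151 representatives. A minimal sketch of the arithmetic in Python, assuming the two-thirds requirement is measured against each chamber's full membership:

```python
import math

# Smallest whole number of votes that is at least two-thirds of a chamber,
# assuming the threshold is reckoned against the full membership.
def override_threshold(members: int) -> int:
    return math.ceil(2 * members / 3)

print(override_threshold(36))   # 24 senators needed to override a veto
print(override_threshold(151))  # 101 representatives needed
```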
Connecticut's current United States senators are Richard Blumenthal and Chris Murphy, both Democrats. Connecticut has five representatives in the U.S. House, all of whom are also Democrats.
Locally elected representatives also develop local ordinances to govern cities and towns, often covering matters such as noise control and zoning. The State of Connecticut maintains statewide noise-control regulations as well.
Judicial.
The highest court of Connecticut's judicial branch is the Connecticut Supreme Court, headed by the Chief Justice of Connecticut. The Supreme Court is responsible for deciding on the constitutionality of laws, or cases as they relate to the law. Its proceedings are similar to those of the United States Supreme Court: no testimony is given by witnesses, and the lawyers of the two sides each present oral arguments of no more than thirty minutes. Following a court proceeding, the court may take several months to arrive at a judgment. Richard A. Robinson is the current chief justice.
In 1818, the court became a separate entity, independent of the legislative and executive branches. The Connecticut Appellate Court is a lesser statewide court, and the Superior Courts are lower courts that resemble county courts of other states.
Local government.
Connecticut and Rhode Island are the only states without county governments. Connecticut's county governments were mostly eliminated in 1960, with the exception of sheriffs elected in each county. In 2000, the county sheriffs were abolished and replaced with the state marshal system, which has districts that follow the old county territories. The judicial system is divided into judicial districts at the trial-court level which largely follow the old county lines. The eight counties are still widely used for purely geographical and statistical purposes, such as weather reports and census reporting, although the Census Bureau will cease using them in 2024.
The state is divided into nine regional councils of governments (COGs) defined by the state Office of Policy and Management, which facilitate regional planning and the coordination of services between member towns. The Intergovernmental Policy Division of this office coordinates regional planning with the administrative bodies of these regions. Each region has an administrative body made up of the chief executive officers of the member towns. The regions are established for the purpose of "coordination of regional and state planning activities; redesignation of logical planning regions and promotion of the continuation of regional planning organizations within the state; and provision for technical aid and the administration of financial assistance to regional planning organizations". By 2015, the State of Connecticut recognized the COGs as county equivalents, allowing them to apply for funding and grants made available to county governments in other states. In 2019 the state recommended to the United States Census Bureau that the nine councils of governments replace its counties for statistical purposes. This proposal was approved by the Census Bureau in 2022, and will be fully implemented by 2024.
Connecticut shares with the rest of New England a governmental institution called the New England town. The state is divided into 169 towns which serve as the fundamental political jurisdictions. There are also 21 cities, most of which simply follow the boundaries of their namesake towns and have a merged city-town government. There are two exceptions: the City of Groton, which is a subsection of the Town of Groton, and the City of Winsted in the Town of Winchester. There are also nine incorporated boroughs which may provide additional services to a section of town. Naugatuck is a consolidated town and borough.
Politics.
Connecticut is a blue state. As of 2024, both of its U.S. Senators, all five of its U.S. House representatives, as well as its Governor, Lt. Governor, Attorney General, and Secretary of State, are members of the Democratic Party. The last Republican presidential candidate to win Connecticut's votes in the Electoral College was George H. W. Bush in 1988.
Registered voters.
Connecticut residents who register to vote may declare an affiliation to a political party, may become unaffiliated at will, and may change affiliations subject to certain waiting periods. Around 58% of registered voters are enrolled in a political party. The Democratic Party of Connecticut is the largest party in the state by voter registration, with 36% of voters, followed by the Connecticut Republican Party with approximately 21%. An additional 1.6% are registered to third parties. As of 2022, four third parties have statewide enrollment privileges (meaning any state resident may register as a member): the Libertarian Party of Connecticut, the Independent Party of Connecticut, the Connecticut Green Party, and the Connecticut Working Families Party. Connecticut allows electoral fusion, in which the same candidate can run on the ballot of more than one political party; this is often used by the Connecticut Working Families Party to cross-endorse Democratic candidates.
Legislation.
In July 2009, the Connecticut legislature overrode a veto by Governor M. Jodi Rell to pass SustiNet, the first significant public-option health care reform legislation in the nation.
In April 2012, both houses of the Connecticut state legislature passed a bill (20–16 in the Senate and 86–62 in the House) abolishing capital punishment for all future crimes; the 11 inmates then on death row remained subject to execution.
Education.
Connecticut ranked third in the nation for educational performance, according to Education Week's Quality Counts 2018 report. It earned an overall score of 83.5 out of 100 points. On average, the country received a score of 75.2. Connecticut posted a B-plus in the Chance-for-Success category, ranking fourth on factors that contribute to a person's success both within and outside the K-12 education system. Connecticut received a mark of B-plus and finished fourth for School Finance. It ranked 12th with a grade of C on the K-12 Achievement Index.
K–12.
Public schools.
Hartford Public High School (1638) is the third-oldest secondary school in the nation after the Collegiate School (1628) in Manhattan and the Boston Latin School (1635). Today, the Connecticut State Board of Education manages the public school system for children in grades K–12. Board of Education members are appointed by the Governor of Connecticut.
Private schools.
Connecticut has a number of private schools. Private schools may file for approval by the state Department of Education, but are not required to. Per state law, private schools must file yearly attendance reports with the state.
Notable private schools include Choate Rosemary Hall, The Hotchkiss School, Loomis Chaffee School, and Taft School.
Colleges and universities.
Connecticut was home to the nation's first law school, Litchfield Law School, which operated from 1773 to 1833 in Litchfield. Well-known universities in the state include Yale University, Wesleyan University, Trinity College, Sacred Heart University, Fairfield University, Quinnipiac University, and the University of Connecticut. The Connecticut State University System includes four state universities, and the state also has 12 community colleges. The United States Coast Guard Academy is located in New London.
Sports.
There are two Connecticut teams in the American Hockey League. The Bridgeport Islanders, a farm team of the New York Islanders, compete at Total Mortgage Arena in Bridgeport. The Hartford Wolf Pack, an affiliate of the New York Rangers, play at the PeoplesBank Arena in Hartford.
The Hartford Yard Goats of the Double-A Northeast are the Double-A affiliate of the Colorado Rockies. The Norwich Sea Unicorns play in the Futures Collegiate Baseball League, and the New Britain Bees play in the Atlantic League of Professional Baseball. The Connecticut Sun of the WNBA play at the Mohegan Sun Arena in Uncasville. In soccer, Hartford Athletic began play in the USL Championship in 2019.
The state hosts several major sporting events. Since 1952, a PGA Tour golf tournament has been played in the Hartford area. It was originally called the "Insurance City Open" and later the "Greater Hartford Open" and is now known as the Travelers Championship.
Lime Rock Park in Salisbury is a road racing course, home to the International Motor Sports Association, SCCA, United States Auto Club, and K&N Pro Series East races. Thompson International Speedway, Stafford Motor Speedway, and Waterford Speedbowl are oval tracks holding weekly races for NASCAR Modifieds and other classes, including the NASCAR Whelen Modified Tour. The state also hosts several major mixed martial arts events for Bellator MMA and the Ultimate Fighting Championship.
Professional sports teams.
The Hartford Whalers of the National Hockey League played in Hartford from 1975 to 1997 at the Hartford Civic Center. They departed to Raleigh, North Carolina, after disputes with the state over the construction of a new arena, and they are now known as the Carolina Hurricanes. A baseball team known as the Hartfords (or Hartford Dark Blues) played in the National Association from 1874 to 1875, before becoming charter members of the National League in 1876. The team moved to Brooklyn, New York, and then disbanded one season later. In 1926, Hartford also had a franchise in the National Football League known as the Hartford Blues. From 2000 until 2006 the city was home to the Hartford FoxForce of World TeamTennis.
College sports.
The Connecticut Huskies are the athletic teams of the University of Connecticut (UConn); they compete in NCAA Division I sports. Both the men's and women's basketball teams have won multiple national championships. In 2004, UConn became the first school in NCAA Division I history to have its men's and women's basketball programs win the national title in the same year; it repeated the feat in 2014 and remains the only Division I school to win both titles in the same year. The UConn women's basketball team holds the record for the longest consecutive winning streak in NCAA college basketball at 111 games, a streak that ended in 2017. Both teams play home games at the on-campus Harry A. Gampel Pavilion in Storrs and at the PeoplesBank Arena in Hartford. The UConn Huskies football team has competed in the Football Bowl Subdivision since 2002 and has appeared in four bowl games.
New Haven biennially hosts "The Game" between the Yale Bulldogs and the Harvard Crimson, the country's second-oldest college football rivalry. Yale alumnus Walter Camp is deemed the "Father of American Football", and he helped develop modern football while living in New Haven. Other Connecticut universities which feature Division I sports teams are Quinnipiac University, Fairfield University, Central Connecticut State University and Sacred Heart University.
Etymology and symbols.
The name "Connecticut" originated with the Mohegan word "quonehtacut", meaning "place of long tidal river". Connecticut's official nickname is "The Constitution State", adopted in 1959 and based on its colonial constitution of 1638–1639 which was the first in America and, arguably, the world. Connecticut is also unofficially known as "The Nutmeg State", whose origin is unknown. It may have come from its sailors returning from voyages with nutmeg, which was a very valuable spice in the 18th and 19th centuries. It may have originated in the early machined sheet tin nutmeg grinders sold by early Connecticut peddlers. It is also facetiously said to come from Yankee peddlers from Connecticut who would sell small carved knobs of wood shaped to look like nutmeg to unsuspecting customers. George Washington gave Connecticut the title of "The Provisions State" because of the material aid that the state rendered to the American Revolutionary War effort. Connecticut is also known as "The Land of Steady Habits".
According to "Webster's New International Dictionary" (1993), a person who is a native or resident of Connecticut is a "Connecticuter". There are numerous other terms coined in print but not in use, such as "Connecticotian" (Cotton Mather in 1702) and "Connecticutensian" (Samuel Peters in 1781). Linguist Allen Walker Read suggests the more playful term "Connecticutie". "Nutmegger" is sometimes used, as is "Yankee".
The official state song is "Yankee Doodle". The traditional abbreviation of the state's name is "Conn."; the official postal abbreviation is CT.
Commemorative stamps issued by the United States Postal Service with Connecticut themes include Nathan Hale, Eugene O'Neill, Josiah Willard Gibbs, Noah Webster, Eli Whitney, the whaling ship the "Charles W. Morgan", which is docked at Mystic Seaport, and a decoy of a broadbill duck.
Country Liberal Party
The Country Liberal Party of the Northern Territory (CLP), commonly known as the Country Liberals, is a centre-right and conservative political party in Australia's Northern Territory. In territory politics, it operates in a two-party system with the Australian Labor Party (ALP). It also contests federal elections as an affiliate of the Liberal Party of Australia and National Party of Australia, the two partners in the federal coalition.
The CLP originated in 1971 as a division of the Country Party (later renamed the National Party), the first local branches of which were formed in 1966. It adopted its current name in 1974 to attract Liberal Party supporters, but maintained a sole affiliation with the Country Party until 1979, when it acquired observer status with the Liberals while maintaining full voting rights in the Country Party. The party dominated the Northern Territory Legislative Assembly from the inaugural election in 1974 through to its defeat at the 2001 election, winning eight consecutive elections and providing the territory's first seven chief ministers. Following its defeat in 2001, the party did not return to power until 2012, and it was defeated again at the 2016 election. It remained in opposition until the 2024 election, in which it regained government in a landslide; its leader Lia Finocchiaro, who had been elected party leader and leader of the opposition in February 2020, became Chief Minister.
At federal level, the CLP contests elections for the Northern Territory's House of Representatives and Senate seats, which also cover the Australian Indian Ocean Territories. It is registered with the Australian Electoral Commission (AEC). Its candidates do not form a separate parliamentary party but instead join either the Liberal or National party rooms – for instance, CLP senator Nigel Scullion was a long-serving deputy leader of the Nationals. Its sole current federal legislator Jacinta Nampijinpa Price, also a senator, sits with the Liberal Party.
The CLP's constitution describes it as an "independent conservative" party and commits it to Northern Territory statehood. It has typically prioritised economic development of the territory and originally drew most of its support from Outback towns and the pastoral industry. It later developed a voter base among the urban middle-class populations of Darwin, Palmerston and Alice Springs (the latter two of which are strongholds for the party). The CLP provided the territory's first Indigenous MP (Hyacinth Tungutalum) and Australia's first Indigenous head of government (Adam Giles).
History.
Origins.
A party system did not develop in the Northern Territory until the 1960s, due to its small population and lack of regular elections. The Australian Labor Party (ALP) contested elections as early as 1905, but rarely faced an organised opposition; anti-Labor candidates usually stood as independents. The regionalist North Australia Party (NAP), established by Lionel Rose for the 1965 Legislative Council election, has been cited as a predecessor of the CLP.
A Darwin branch of the Country Party was established on 20 July 1966, followed by an Alice Springs branch on 29 July. The creation of the branches was spurred by the upcoming 1966 federal election and the announcement by the Northern Territory's federal MP Jock Nelson that he would be retiring from politics. The Country Party achieved its first electoral success with the election of Sam Calder as Nelson's replacement. It subsequently won four out of eleven seats at the 1968 Legislative Council election. A third branch of the party was established in Katherine in February 1971. The branches affiliated with the Federal Council of the Australian Country Party in July 1971, establishing a formal entity with a central council, executive and annual conference. The party was formally named the "Australian Country Party – Northern Territory".
The Country Party primarily drew its support from Alice Springs, small towns, and the pastoral industry, including "a fair proportion of the non-urban Aboriginal vote". The party did not have a strong presence in Darwin. A branch of the Liberal Party, the Country Party's coalition partner at a federal level, had been established in Darwin in 1966, representing commercial interests and urban professionals. The Liberals fielded candidates at the 1968 Legislative Council elections, but by 1970 the local branch had ceased to function. In 1973, the Country Party began actively working to include Liberal supporters within its organisation, spurred by the Whitlam government's announcement of a fully elective Northern Territory Legislative Assembly. Following informal negotiations led by Goff Letts, a joint committee was established to determine changes to the Country Party's constitution and policy. These were officially approved, along with the adoption of the name Country Liberal Party, at the party's annual conference in Alice Springs on 20 July 1974. Per its 2018 constitution, the party reckons 1974 as its founding date.
1974–2001: Foundation and early dominance.
The Whitlam government passed legislation in 1974 to establish a fully elected unicameral Northern Territory Legislative Assembly, replacing the previous partly elected Legislative Council, which had been in existence since 1947. The CLP won 17 out of 19 seats at the inaugural elections in October 1974, with independents holding the other two seats. Goff Letts became the inaugural majority leader, a title changed to chief minister after the granting of self-government in 1978. The CLP governed the Northern Territory from 1974 until the 2001 election. During this time, it never faced more than nine opposition members. Indeed, the CLP's dominance was so absolute that its internal politics were seen as a bigger threat than any opposition party. This was especially pronounced in the mid-1980s, when a series of party-room coups resulted in the Territory having three Chief Ministers during the 1983–87 term and also saw the creation of the Northern Territory Nationals as a short-lived splinter group under the leadership of former CLP chief minister Ian Tuxworth. According to ABC election analyst Antony Green, the CLP weathered these severe ructions because Territory Labor was "unelectable" at the time.
The Whitlam government also passed legislation to give the Northern Territory and Australian Capital Territory (ACT) representation in the federal Senate, with each territory electing two senators. Bernie Kilgariff was elected as the CLP's first senator at the 1975 federal election, sitting alongside Sam Calder in the parliamentary National Country Party. On 3 February 1979 a special conference of the CLP resolved that "the Federal CLP Parliamentarians be permitted to sit in the Party Rooms of their choice in Canberra". Despite personal misgivings, Kilgariff chose to sit with the parliamentary Liberal Party from 8 March 1979 in order that the CLP have representation in both parties, a practice which has been maintained where possible.
2001–2012: In opposition.
At the 2001 election, the Australian Labor Party won government by one seat, ending 27 years of CLP government. The loss marked a major turning point in Northern Territory politics, and the setback deepened at the 2005 election, when the ALP won the second-largest majority government in the history of the Territory, reducing the once-dominant party to just four members in the Legislative Assembly; that majority was exceeded only by the one the CLP itself had won in 1974, when it faced just two independents in opposition. The CLP even lost two seats in Palmerston, an area where the ALP had never come close to winning any seats before.
In the 2001 federal election, the CLP won the newly formed seat of Solomon, based on Darwin/Palmerston, in the House of Representatives.
In the 2004 federal election, the CLP held one seat in the House of Representatives and one seat in the Senate. The CLP lost its federal lower house seat in the 2007 federal election, but regained it in 2010 when Palmerston deputy mayor Natasha Griggs won back Solomon for the CLP; she sat with the Liberals in the House.
The 2008 election saw the CLP recover from the severe loss it suffered three years earlier, increasing its representation from four to 11 members. The 2011 decision of ALP-turned-independent member Alison Anderson to join the CLP increased the party's representation to 12 in the Assembly, leaving the incumbent Henderson government to govern in minority with the support of independent MP Gerry Wood.
Historically, the CLP has been particularly dominant in the Territory's two major cities, Darwin/Palmerston and Alice Springs. However, in recent years the ALP has pulled even with the CLP in the Darwin area; indeed, its 2001 victory was fueled by an unexpected swing in Darwin.
2012–2016: Return to government and internal conflict.
The CLP under the leadership of Terry Mills returned to power in the 2012 election with 16 of 25 seats, defeating the incumbent Labor government led by Paul Henderson. In the lead-up to the territory election, CLP Senator Nigel Scullion sharply criticised the federal Labor government for its suspension of the live cattle trade to Indonesia, an economic mainstay of the territory.
The election victory ended 11 years of ALP rule in the Northern Territory. The victory was also notable for the support it achieved from indigenous people in pastoral and remote electorates. Large swings were achieved in remote Territory electorates (where the indigenous population comprised around two-thirds of voters) and a total of five Aboriginal CLP candidates won election to the Assembly. Among the indigenous candidates elected were high-profile Aboriginal activist Bess Price and former ALP member Alison Anderson. Anderson was appointed Minister for Indigenous Advancement. In a nationally reported speech in November 2012, Anderson condemned welfare dependency and a culture of entitlement in her first ministerial statement on the status of Aboriginal communities in the Territory and said the CLP would focus on improving education and on helping create real jobs for indigenous people.
Leadership spills.
Adam Giles replaced Mills as Chief Minister of the Northern Territory and party leader at the 2013 CLP leadership ballot on 13 March while Mills was on a trade mission in Japan. Giles was sworn in as Chief Minister on 14 March, becoming the first indigenous head of government of an Australian state or territory.
Willem Westra van Holthe challenged Giles at the 2015 CLP leadership ballot on 2 February and was elected leader by the party room in a late night vote conducted by phone. However, Giles refused to resign as Chief Minister following the vote. On 3 February, "ABC News" reported that officials were preparing an instrument for Giles' removal by the Administrator. The swearing-in of Westra van Holthe, which had been scheduled for 11:00 local time (01:30 UTC), was delayed. After a meeting of the parliamentary wing of the CLP, Giles announced that he would remain as party leader and Chief Minister, and that Westra van Holthe would be his deputy.
Defections and minority government.
After four defections during the parliamentary term, the CLP was reduced to minority government by July 2015. Giles raised the possibility of an early election on 20 July stating that he would "love" to call a snap poll, but that it was "pretty much impossible to do". Crossbenchers dismissed the notion of voting against a confidence motion to bring down the government.
2016–2024: In opposition.
Territory government legislation passed in February 2016 changed the voting method of single-member electorates from full-preferential voting to optional preferential voting ahead of the 2016 territory election held on 27 August.
Federally, a MediaReach seat-level opinion poll of 513 voters in the seat of Solomon, conducted 22–23 June ahead of the 2016 federal election held on 2 July, found Labor candidate Luke Gosling heavily leading two-term CLP incumbent Natasha Griggs 61–39 on the two-party vote, a large 12.4 percent swing. The CLP lost Solomon to Labor at the election, with Gosling defeating Griggs 56–44 on the two-party vote from a 7.4 percent swing.
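For readers unfamiliar with Australian election reporting, a "swing" is simply the change in a side's share of the final two-party vote between consecutive elections. A minimal sketch using the Solomon figures above (the helper function is illustrative only, not an official formula):

```python
# Two-party-preferred (2PP) swing: the change in one side's share of the
# final two-candidate vote between consecutive elections.
def two_party_swing(previous_share: float, current_share: float) -> float:
    return current_share - previous_share

# Solomon, 2016: Labor finished on 56.0% 2PP after a 7.4-point swing,
# implying Labor held roughly 48.6% 2PP at the previous election.
print(two_party_swing(48.6, 56.0))  # 7.4
```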
Polling ahead of the 2016 territory election indicated a large swing against the CLP, including a near-total collapse in Darwin/Palmerston. By the time the writs were issued, commentators had almost universally written off the CLP. At the 27 August territory election, the CLP was swept from power in a massive Labor landslide, suffering easily the worst defeat of a sitting government in Territory history and one of the worst defeats a governing party has ever suffered at the state or territory level in Australia. The party not only lost all of the bush seats it had picked up in 2012, but was all but shut out of Darwin/Palmerston, winning only one seat there. All told, the CLP won only two seats, easily its worst showing in an election. Giles himself lost his seat, becoming the second majority leader/chief minister to do so. Even before Giles' defeat was confirmed, second-term MP Gary Higgins, the only surviving member of the Giles cabinet, was named the party's new leader, with Lia Finocchiaro as his deputy. On 20 January 2020, Higgins announced his resignation as party leader and his retirement at the next election. Finocchiaro succeeded him as CLP leader and leader of the opposition on 1 February 2020.
Finocchiaro led the CLP to a modest recovery at the 2020 territory election. The CLP gained six seats, boosting its total to eight. However, it failed to make significant inroads in the Darwin/Palmerston area, winning only two seats there, including that of Finocchiaro.
The CLP lost the seat of Daly to Labor in a 2021 by-election, the first time an incumbent government had won a seat from the opposition in territory history.
The CLP won a landslide victory in the 2024 Northern Territory general election.
Ideology.
The CLP contests elections for the Northern Territory Legislative Assembly and the Federal Parliament of Australia and primarily concerns itself with representing Territory interests. It is a regionally based party with parliamentary representation at both the federal and territory levels, and it brands itself as a party with strong roots in the Territory.
The CLP competes against the Territory Labor Party (the local branch of Australia's largest social democratic party). It is closely affiliated with, but independent from, the Liberal Party of Australia (a mainly urban, pro-business party with a broadly liberal membership) and the National Party of Australia (a conservative party representing regional interests).
The foreword to the constitution of the party describes it as an "independent conservative political party". One of the objectives in the party's constitution is to "work toward the achievement of Statehood in the Northern Territory". The party promotes traditional Liberal Party values such as individualism and private enterprise, and what it describes as "progressive" political policy such as full statehood for the Northern Territory.
In February 2023, the party voted to oppose the Voice to Parliament.
Voter base.
Traditionally, the CLP's voting base has been mostly concentrated in Palmerston, Alice Springs, Katherine and parts of Darwin, as well as in rural towns where the majority of people are white.
Initially, remote Indigenous communities around Australia voted strongly for Labor, but in recent years Indigenous Australians have been voting for the Coalition more frequently, particularly in remote communities. At the same time, Labor has become stronger in Darwin and Palmerston. At the 2012 general election, the CLP won government by gaining five remote seats where the majority of the population identify as Aboriginal and that were traditionally considered safe for Labor. In 2016, the CLP was defeated by Labor in a landslide and lost most of its ground territory-wide. In 2020, however, the CLP regained some ground in remote areas, including narrowly winning the seat of Barkly, which it had not won even in 2012, with a large swing in its favour.
The CLP's hold on power was once so tight that a former minister described the party as having a "rightful inheritance of being the party that runs this place".
Demographics.
A 2023 poll conducted by the Redbridge Group, which found that the CLP would win the 2024 general election in a landslide, examined voting intention by demographic in the Northern Territory. It found that the CLP draws support from many demographics: the party is overwhelmingly more popular than Labor among middle- and high-income earners and people over 40, and it leads Labor among both Indigenous and non-Indigenous people, English and non-English speakers, and both men and women. People aged between 18 and 40 are still more likely to vote for the CLP than for any other party, but by a smaller margin than people over 40.
Because fewer parties and candidates contest Northern Territory general elections than contest federal elections in the Northern Territory, the CLP, Labor and independents usually record a higher vote share at territory elections than at federal elections, owing to the absence of right-wing minor parties such as Pauline Hanson's One Nation and the fact that the Greens do not run in every seat at territory elections. At the territory level, the Redbridge poll found that 25% of One Nation supporters would vote for the CLP, second only to the Shooters, Fishers and Farmers Party (SFF) at 33%.
Organisation.
The Annual Conference of the Country Liberal Party, attended by branch delegates and members of the party's Central Council, decides matters relating to the party's platform and philosophy. The Central Council administers the party and makes decisions on pre-selections. It is composed of the party's office bearers, its leaders in the Northern Territory Legislative Assembly, members in the Federal Parliament, and representation from each of the party's branches.
The CLP president has full voting rights with the National Party and observer status with the Liberal Party. Both the Liberals and Nationals receive Country Liberal delegations at their conventions. After federal elections, the CLP directs its federal members and senators as to which of the two other parties they should sit with in the parliamentary chamber. In practice, since the 1980s CLP House members usually sit with the Liberals, while CLP Senators usually sit with the Nationals.
Canon law
Canon law (from Greek "kanōn", a 'straight measuring rod, ruler') is a set of ordinances and regulations made by ecclesiastical authority (church leadership) for the government of a Christian organization or church and its members.
Canon law includes the internal ecclesiastical law, or operational policy, governing the Catholic Church (both the Latin Church and the Eastern Catholic Churches), the Eastern Orthodox and Oriental Orthodox churches, and the individual national churches within the Anglican Communion. The way that such church law is legislated, interpreted and at times adjudicated varies widely among these bodies of churches. In all of these traditions, a canon was originally a rule adopted by a church council; these canons formed the foundation of canon law.
Etymology.
Greek "kanōn", Arabic "qānūn", Hebrew "qāneh", 'straight'; a rule, code, standard, or measure; the root meaning in all these languages is 'reed'; see also the Romance-language ancestors of the English word "cane".
In the fourth century, the First Council of Nicaea (325) used the term "canons" (Greek κανών, 'rule') for the disciplinary measures of the church. From very early on, a distinction was drawn between the rules enacted by the church and the legislative measures taken by the state, called "leges", Latin for laws.
Apostolic Canons.
The "Apostolic Canons" or "Ecclesiastical Canons of the Same Holy Apostles" is a collection of ancient ecclesiastical decrees (eighty-five in the Eastern, fifty in the Western Church) concerning the government and discipline of the Early Christian Church, incorporated with the Apostolic Constitutions which are part of the Ante-Nicene Fathers.
Catholic Church.
In the Catholic Church, canon law is the system of laws and legal principles made and enforced by the church's hierarchical authorities to regulate its external organization and government and to order and direct the activities of Catholics toward the mission of the church. It was the first modern Western legal system and is the oldest continuously functioning legal system in the West.
In the Latin Church, positive ecclesiastical laws, based directly or indirectly upon immutable divine law or natural law, derive formal authority in the case of universal laws from the supreme legislator (i.e., the Supreme Pontiff), who possesses the totality of legislative, executive, and judicial power in his person, while particular laws derive formal authority from a legislator inferior to the supreme legislator. The actual subject material of the canons is not just doctrinal or moral in nature, but all-encompassing of the human condition, and therefore extending beyond what is taken as revealed truth.
The Catholic Church also includes five main rites (groups) of churches in full union with the Holy See and the Latin Church: the Alexandrian, Armenian, Byzantine, East Syriac, and West Syriac rites. All of these church groups are in full communion with the Supreme Pontiff and are subject to the "Code of Canons of the Eastern Churches".
History, sources of law, and codifications.
The Catholic Church has what is claimed to be the oldest continuously functioning internal legal system in Western Europe, much later than Roman law but predating the evolution of modern European civil law traditions.
The history of Latin canon law can be divided into four periods: the "jus antiquum", the "jus novum", the "jus novissimum" and the "Code of Canon Law". In relation to the Code, history can be divided into the "jus vetus" (all law before the Code) and the "jus novum" (the law of the Code, or "jus codicis").
The canon law of the Eastern Catholic Churches, which had developed some different disciplines and practices, underwent its own process of codification, resulting in the Code of Canons of the Eastern Churches promulgated in 1990 by Pope John Paul II.
Catholic canon law as legal system.
Roman Catholic canon law is a fully developed legal system, with all the necessary elements: courts, lawyers, judges, a fully articulated legal code, principles of legal interpretation, and coercive penalties, though it lacks civilly-binding force in most secular jurisdictions. One example of conflict between secular and canon law occurred in the English legal system, as well as in systems derived from it, such as that of the United States: criminals could apply for the benefit of clergy. Being in holy orders, or fraudulently claiming to be, meant that criminals could opt to be tried by ecclesiastical rather than secular courts, which were generally more lenient. Under the Tudors, the scope of clerical benefit was steadily reduced by Henry VII, Henry VIII, and Elizabeth I. The papacy disputed secular authority over priests' criminal offenses. The benefit of clergy was systematically removed from English legal systems over the next 200 years, although it was still invoked in South Carolina as late as 1855.
In English law, the use of this mechanism, which by that point was a legal fiction used for first offenders, was abolished by the Criminal Law Act 1827.
The academic degrees in Catholic canon law are the J.C.B. ("Juris Canonici Baccalaureatus", Bachelor of Canon Law, normally taken as a graduate degree), J.C.L. ("Juris Canonici Licentiatus", Licentiate of Canon Law) and the J.C.D. ("Juris Canonici Doctor", Doctor of Canon Law). Because of its specialized nature, advanced degrees in civil law or theology are normal prerequisites for the study of canon law.
Much of Catholic canon law's legislative style was adapted from the Roman Code of Justinian. As a result, Roman ecclesiastical courts tend to follow the Roman Law style of continental Europe with some variation, featuring collegiate panels of judges and an investigative form of proceeding, called "inquisitorial", from the Latin "inquirere", to enquire. This is in contrast to the adversarial form of proceeding found in the common law system of English and U.S. law, which features such things as juries and single judges.
The institutions and practices of Catholic canon law paralleled the legal development of much of Europe, and consequently, both modern civil law and common law bear the influences of canon law. As Edson Luiz Sampel, a Brazilian expert in Catholic canon law, says, canon law is contained in the genesis of various institutes of civil law, such as the law in continental Europe and Latin American countries. Indirectly, canon law has significant influence in contemporary society.
Catholic Canonical jurisprudential theory generally follows the principles of Aristotelian-Thomistic legal philosophy. While the term "law" is never explicitly defined in the Catholic Code of Canon Law, the "Catechism of the Catholic Church" cites Aquinas in defining law as "an ordinance of reason for the common good, promulgated by the one who is in charge of the community" and reformulates it as "a rule of conduct enacted by competent authority for the sake of the common good".
Code for the Eastern Churches.
The law of the Eastern Catholic Churches in full communion with the Roman papacy was in much the same state as that of the Latin Church before 1917; much more diversity in legislation existed among the various Eastern Catholic Churches. Each had its own special law, in which custom still played an important part. One major difference in Eastern Europe, however, specifically in the Eastern Orthodox churches, concerned divorce, which slowly came to be allowed in specific instances, with adultery, abuse, abandonment, impotence, and barrenness as the primary justifications. Eventually, the church began to allow remarriage (for both spouses) after divorce. In 1929 Pius XI informed the Eastern Churches of his intention to work out a code for the whole of the Eastern Church. Parts of these codes, covering the law of persons, were published between 1949 and 1958, but the project was not finalized until nearly 30 years later.
The first Code of Canon Law (1917) was exclusively for the Latin Church, with application to the Eastern Churches only "in cases which pertain to their very nature". After the Second Vatican Council (1962–1965), the Vatican produced the "Code of Canons of the Eastern Churches", which became the first code of Eastern Catholic canon law.
Eastern Orthodox Church.
The Eastern Orthodox Church, principally through the work of the 18th-century Athonite monastic scholar Nicodemus the Hagiorite, has compiled canons and commentaries upon them in a work known as the "Pedalion" ('Rudder'), so named because it is meant to "steer" the church in her discipline. The dogmatic determinations of the councils are to be applied rigorously, since they are considered to be essential for the church's unity and the faithful preservation of the Gospel.
Anglican Communion.
In the Church of England, the ecclesiastical courts that formerly decided many matters such as disputes relating to marriage, divorce, wills, and defamation, still have jurisdiction of certain church-related matters (e.g. discipline of clergy, alteration of church property, and issues related to churchyards). Their separate status dates back to the 12th century when the Normans split them off from the mixed secular/religious county and local courts used by the Saxons. In contrast to the other courts of England, the law used in ecclesiastical matters is at least partially a civil law system, not common law, although heavily governed by parliamentary statutes. Since the Reformation, ecclesiastical courts in England have been royal courts. The teaching of canon law at the Universities of Oxford and Cambridge was abrogated by Henry VIII; thereafter practitioners in the ecclesiastical courts were trained in civil law, receiving a Doctor of Civil Law (D.C.L.) degree from Oxford, or a Doctor of Laws (LL.D.) degree from Cambridge. Such lawyers (called "doctors" and "civilians") were centered at "Doctors Commons", a few streets south of St Paul's Cathedral in London, where they monopolized probate, matrimonial, and admiralty cases until their jurisdiction was removed to the common law courts in the mid-19th century.
Other churches in the Anglican Communion around the world (e.g., the Episcopal Church in the United States and the Anglican Church of Canada) still function under their own private systems of canon law.
In 2002 a Legal Advisors Consultation meeting at Canterbury concluded: (1) There are principles of canon law common to the churches within the Anglican Communion; (2) their existence can be factually established; (3) each province or church contributes through its own legal system to the principles of canon law common within the Communion; (4) these principles have strong persuasive authority and are fundamental to the self-understanding of each of the member churches; (5) these principles have a living force, and contain within themselves the possibility for further development; and (6) the existence of the principles both demonstrates and promotes unity in the Communion.
Presbyterian and Reformed churches.
In Presbyterian and Reformed churches, canon law is known as "practice and procedure" or "church order", and includes the church's laws respecting its government, discipline, legal practice, and worship.
Roman canon law had been criticized by the Presbyterians as early as 1572 in the Admonition to Parliament. The protest centered on the standard defense that canon law could be retained so long as it did not contradict the civil law. According to Polly Ha, the Reformed church government refuted this, claiming that the bishops had been enforcing canon law for 1500 years.
Lutheranism.
The Book of Concord is the historic doctrinal statement of the Lutheran Church, consisting of ten credal documents recognized as authoritative in Lutheranism since the 16th century. However, the Book of Concord is a confessional document (stating orthodox belief) rather than a book of ecclesiastical rules or discipline, like canon law. Each Lutheran national church establishes its own system of church order and discipline, though these are referred to as "canons".
United Methodist Church.
The Book of Discipline contains the laws, rules, policies, and guidelines for The United Methodist Church. Its latest edition was published in 2024.
Columbanus
Saint Columbanus (543 – 23 November 615) was an Irish missionary notable for founding a number of monasteries after 590 in the Frankish and Lombard kingdoms, most notably Luxeuil Abbey in present-day France and Bobbio Abbey in present-day Italy.
Columbanus taught an Irish monastic rule and penitential practices for those repenting of sins, which emphasised private confession to a priest, followed by penances imposed by the priest in reparation for the sins. Columbanus is one of the earliest identifiable Hiberno-Latin writers.
Sources.
Most of what we know about Columbanus is based on his own works (as far as they have been preserved) and on Jonas of Susa's "Vita Columbani" ("Life of Columbanus"), which was written between 639 and 641.
Jonas entered Bobbio after Columbanus' death but relied on reports of monks who still knew Columbanus. A description of miracles of Columbanus written by an anonymous monk of Bobbio is of much later date. In the second volume of his "Acta Sanctorum O.S.B.", Mabillon gives the life in full, together with an appendix on the miracles of Columbanus, written by an anonymous member of the Bobbio community.
Biography and early life.
Columbanus (the Latinised form of "Colmán", meaning "little dove") was born in Leinster, Ireland in 543. After his conception, his mother was said to have had a vision of her child's "remarkable genius".
He was first educated under Abbot Sinell of Cluaninis, whose monastery was on an island of the River Erne, in modern County Fermanagh. Under Sinell's instruction, Columbanus composed a commentary on the Psalms.
Columbanus then moved to Bangor Abbey where he studied to become a teacher of the Bible. He was well-educated in the areas of grammar, rhetoric, geometry, and the Holy Scriptures. Abbot Comgall taught him Greek and Latin. He stayed at Bangor until c. 590, when Comgall reluctantly gave him permission to travel to the continent.
Frankish Gaul (c. 590 – 610).
Columbanus set sail with twelve companions: Attala, Columbanus the Younger, Gallus, Domgal, Cummain, Eogain, Eunan, Gurgano, Libran, Lua, Sigisbert and Waldoleno. They crossed the channel via Cornwall and landed in Saint-Malo, Brittany.
Columbanus then entered Burgundian France. Jonas writes: "At that time, either because of the numerous enemies from without, or on account of the carelessness of the bishops, the Christian faith had almost departed from that country. The creed alone remained. But the saving grace of penance and the longing to root out the lusts of the flesh were to be found only in a few. Everywhere that he went the noble man [Columbanus] preached the Gospel. And it pleased the people because his teaching was adorned by eloquence and enforced by examples of virtue." Columbanus and his companions were welcomed by King Guntram of Burgundy, who granted them land at Anegray, where they converted a ruined Roman fortress into a school. Despite its remote location in the Vosges Mountains, the school rapidly attracted so many students that they moved to a new site at Luxeuil and then established a second school at Fontaines. These schools remained under Columbanus' authority, and their rules of life reflected the Celtic tradition in which he had been educated.
As these communities expanded and drew more pilgrims, Columbanus sought greater solitude. Often he would withdraw to a cave seven miles away, with a single companion who acted as messenger between himself and his companions.
Conflict with Frankish Bishops.
Tensions arose in 603 when Columbanus and his followers argued with the Frankish bishops over the exact date of Easter; Columbanus celebrated Easter according to Celtic rites and the Celtic Christian calendar.
The Frankish bishops may have feared his growing influence. During the first half of the sixth century, the councils of Gaul had given bishops absolute authority over religious communities. As Celtic Christians, Columbanus and his monks used the Irish Easter calculation, a version of Bishop Augustalis's 84-year cycle for determining the date of Easter (sometimes described, inaccurately, as quartodecimanism), whereas the Franks had adopted the Victorian cycle of 532 years. The bishops objected to the newcomers' continued observance of their own dating, which, among other issues, caused the end of Lent to differ. They also complained about the distinct Irish tonsure.
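The lengths of these cycles are not arbitrary: in the Julian calendar, Easter dates recur once the 19-year Metonic (lunar) cycle and the 28-year solar (weekday) cycle realign, and 19 × 28 = 532, the span of the Victorian table. A minimal sketch of that arithmetic:

```python
# Why a Victorian Easter table spans 532 years: Julian-calendar Easter
# dates repeat once both the lunar phases (19-year Metonic cycle) and
# the weekday pattern (28-year solar cycle: 7 weekdays x 4-year leap
# cycle) return to the same alignment.
from math import lcm  # Python 3.9+

metonic_years = 19  # lunar phases recur on the same calendar dates
solar_years = 28    # weekdays recur in the Julian calendar
print(lcm(metonic_years, solar_years))  # 532
```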
In 602, the bishops assembled to judge Columbanus, but he did not appear before them as requested. Instead, he sent a letter to the prelates – a strange mixture of freedom, reverence, and charity – admonishing them to hold synods more frequently and advising them to pay more attention to matters of equal importance to the date of Easter; he also wrote in defence of his traditional paschal cycle.
When the bishops refused to abandon the matter, Columbanus appealed directly to Pope Gregory I. In the third and only surviving letter, he asks "the holy Pope, his Father" to provide "the strong support of his authority" and to render a "verdict of his favour", apologising for "presuming to argue as it were, with him who sits in the chair of Peter, Apostle and Bearer of the Keys". None of the letters were answered, most likely due to the pope's death in 604.
Columbanus then sent a letter to Gregory's successor, Pope Boniface IV, asking him to confirm the tradition of his elders – if it was not contrary to the Faith – so that he and his monks could follow the rites of their ancestors. Before Boniface responded, Columbanus moved outside the jurisdiction of the Frankish bishops. As the Easter issue appears to end around that time, Columbanus may have stopped celebrating the Irish date of Easter after moving to Italy.
Conflict with Brunhilda of Austrasia.
Columbanus was also involved in a dispute with members of the Burgundian dynasty. Upon the death of King Guntram of Burgundy, the succession passed to his nephew, Childebert II, the son of his brother Sigebert and Sigebert's wife Brunhilda of Austrasia. When Childebert II died, his territories were divided between his two sons: Theuderic II inherited the Kingdom of Burgundy and Theudebert II inherited the Kingdom of Austrasia. Both were minors and Brunhilda, their grandmother, ruled as their regents.
Theuderic II "very often visited" Columbanus, but when Columbanus rebuked him for having a concubine, Brunhilda became his bitterest foe because she feared the loss of her influence if Theuderic II married. Brunhilda incited the court and Catholic bishops against Columbanus and Theuderic II confronted Columbanus at Luxeuil, accusing him of violating the "common customs" and "not allowing all Christians" in the monastery. Columbanus asserted his independence to run the monastery without interference and was imprisoned at Besançon for execution.
Columbanus escaped and returned to Luxeuil. When the king and his grandmother found out, they sent soldiers to drive him back to Ireland by force, separating him from his monks by insisting that only those from Ireland could accompany him into exile.
Columbanus was taken to Nevers, then travelled by boat down the Loire river to the coast. At Tours he visited the tomb of Martin of Tours, and sent a message to Theuderic II indicating that within three years he and his children would perish. When he arrived at Nantes, he wrote a letter before embarkation to his fellow monks at Luxeuil monastery. The letter urged his brethren to obey Attala, who stayed behind as abbot of the monastic community.
The letter concludes:
Soon after the ship set sail from Nantes, a severe storm drove the vessel back ashore. Convinced that his holy passenger caused the tempest, the captain refused further attempts to transport the monk. Columbanus found sanctuary with Chlothar II of Neustria at Soissons, who gave him an escort to the court of King Theudebert II of Austrasia.
The Alps (611–612).
Columbanus arrived at Theudebert II's court in Metz in 611, where members of the Luxeuil school met him and Theudebert II granted them land at Bregenz. They travelled up the Rhine via Mainz to the lands of the Suebi and Alemanni in the northern Alps, intending to preach the Gospel to these people. He followed the Rhine river and its tributaries, the Aar and the Limmat, and then travelled on to Lake Zurich. Columbanus chose the village of Tuggen as his initial community, but the work was not successful. He continued north-east by way of Arbon to Bregenz on Lake Constance. Here he found an oratory dedicated to Aurelia of Strasbourg, which contained three brass images of the local tutelary deities. Columbanus commanded Gallus, who knew the local language, to preach to the inhabitants, and many were converted. The three brass images were destroyed, and Columbanus blessed the little church, placing the relics of Aurelia beneath the altar. A monastery was erected, Mehrerau Abbey, and the brethren observed their regular life. Columbanus stayed in Bregenz for about one year.
In the spring of 612, war broke out between Austrasia and Burgundy and Theudebert II was resoundingly beaten by Theuderic II. Austrasia was subsumed under the kingdom of Burgundy and Columbanus was again vulnerable to Theuderic II's opprobrium. When Columbanus' students began to be murdered in the woods, Columbanus decided to cross the Alps into Lombardy.
Gallus remained in this area until his death in 646. About seventy years later at the place of Gallus' cell the Abbey of Saint Gall was founded. The city of St. Gallen originated as an adjoining settlement of the abbey.
Lombardy (612–615).
Columbanus arrived in Milan in 612 and was welcomed by King Agilulf and Queen Theodelinda of the Lombards. He immediately began refuting the teachings of Arianism, which had enjoyed a degree of acceptance in Italy. He wrote a treatise against Arianism, which has since been lost. In 614, Agilulf granted Columbanus land for a school at the site of a ruined church at Bobbio.
At the king's request, Columbanus wrote a letter to Pope Boniface IV on the controversy over the "Three Chapters" – writings by Syrian bishops suspected of Nestorianism, which had been condemned in the fifth century as heresy. Pope Gregory I had tolerated in Lombardy those persons who defended the "Three Chapters", among them King Agilulf. Columbanus agreed to take up the issue on behalf of the king. The letter has a diplomatic tone and begins with an apology that a "foolish Scot" (, Irishman) would be writing for a Lombard king. After acquainting the pope with the imputations brought against him, he entreats the pontiff to prove his orthodoxy and assemble a council. When critiquing Boniface, he writes that his freedom of speech is consistent with the custom of his country. Some of the language used in the letter might now be regarded as disrespectful, but in that time faith and austerity could be more indulgent. Columbanus was tactful in his critiques, opening the letter with an expression of the most affectionate and impassioned devotion to the Holy See.
Later, he reveals charges against the Papacy so as to encourage Boniface to make concessions:
Columbanus' deference towards Rome is sufficiently clear, calling the pope "his Lord and Father in Christ", the "Chosen Watchman", and the "First Pastor, set higher than all mortals", also asserting that "we Irish, inhabitants of the world’s edge, are disciples of Saints Peter and Paul and of all the disciples" and that "the unity of faith has produced in the whole world a unity of power and privilege."
King Agilulf gave Columbanus a tract of land called Bobbio between Milan and Genoa near the Trebbia river, situated in a defile of the Apennine Mountains, to be used as a base for the conversion of the Lombard people. The area contained a ruined church and wastelands known as "Ebovium", which had formed part of the lands of the papacy prior to the Lombard invasion. Columbanus wanted this secluded place, for while enthusiastic in the instruction of the Lombards he preferred solitude for his monks and himself. Next to the little church, which was dedicated to Peter the Apostle, Columbanus erected a monastery in 614. Bobbio Abbey at its foundation followed the Rule of Saint Columbanus, based on the monastic practices of Celtic Christianity. For centuries it remained the stronghold of orthodoxy in northern Italy.
Death.
During the last year of his life, Columbanus received messages from King Chlothar II, inviting him to return to Burgundy, now that his enemies were dead. Columbanus did not return, but requested that the king should always protect his monks at Luxeuil Abbey. He prepared for death by retiring to his cave on the mountainside overlooking the Trebbia river, where, according to a tradition, he had dedicated an oratory to Our Lady. Columbanus died at Bobbio on 21 November 615 and is buried there.
Rule of Saint Columbanus.
The Rule of Saint Columbanus embodied the customs of Bangor Abbey and other Irish monasteries. Much shorter than the Rule of Saint Benedict, the Rule of Saint Columbanus consists of ten chapters, on the subjects of obedience, silence, food, poverty, humility, chastity, choir offices, discretion, mortification, and perfection.
In the first chapter, Columbanus introduces the great principle of his Rule: obedience, absolute and unreserved. The words of seniors should always be obeyed, just as "Christ obeyed the Father up to death for us". One manifestation of this obedience was constant hard labour designed to subdue the flesh, exercise the will in daily self-denial, and set an example of industry in cultivation of the soil. The least deviation from the Rule entailed corporal punishment, or a severe form of fasting. In the second chapter, Columbanus instructs that the rule of silence be "carefully observed", since it is written: "But the nurture of righteousness is silence and peace". He also warns, "Justly will they be damned who would not say just things when they could, but preferred to say with garrulous loquacity what is evil". In the third chapter, Columbanus instructs, "Let the monks' food be poor and taken in the evening, such as to avoid repletion, and their drink such as to avoid intoxication, so that it may both maintain life and not harm". Columbanus continues:
In the fourth chapter, Columbanus presents the virtue of poverty and of overcoming greed, and that monks should be satisfied with "small possessions of utter need, knowing that greed is a leprosy for monks". Columbanus also instructs that "nakedness and disdain of riches are the first perfection of monks, but the second is the purging of vices, the third the most perfect and perpetual love of God and unceasing affection for things divine, which follows on the forgetfulness of earthly things. Since this is so, we have need of few things, according to the word of the Lord, or even of one." In the fifth chapter, Columbanus warns against vanity, reminding the monks of Jesus' warning in Luke 16:15: "You are the ones who justify yourselves in the eyes of others, but God knows your hearts. What people value highly is detestable in God's sight." In the sixth chapter, Columbanus instructs that "a monk's chastity is indeed judged in his thoughts" and warns, "What profit is it if he be virgin in body, if he be not virgin in mind? For God, being Spirit."
In the seventh chapter, Columbanus instituted a service of perpetual prayer, known as , by which choir succeeded choir, both day and night. In the eighth chapter, Columbanus stresses the importance of discretion in the lives of monks to avoid "the downfall of some, who beginning without discretion and passing their time without a sobering knowledge, have been unable to complete a praiseworthy life". Monks are instructed to pray to God to "illumine this way, surrounded on every side by the world's thickest darkness". Columbanus continues:
In the ninth chapter, Columbanus presents mortification as an essential element in the lives of monks, who are instructed, "Do nothing without counsel." Monks are warned to "beware of a proud independence, and learn true lowliness as they obey without murmuring and hesitation". According to the Rule, there are three components to mortification: "not to disagree in mind, not to speak as one pleases with the tongue, not to go anywhere with complete freedom". This mirrors the words of Jesus, "For I have come down from heaven not to do my will but to do the will of him who sent me." (John 6:38) In the tenth and final chapter, Columbanus regulates forms of penance (often corporal) for offences, and it is here that the Rule of Saint Columbanus differs significantly from that of Saint Benedict.
The Communal Rule of Columbanus required monks to fast every day until None, or 3 p.m.; this was later relaxed and observed only on designated days. Columbanus' Rule regarding diet was very strict. Monks were to eat a limited diet of beans, vegetables, flour mixed with water, and a small loaf of bread, taken in the evening.
The habit of the monks consisted of a tunic of undyed wool, over which was worn the cuculla, or cowl, of the same material. A great deal of time was devoted to various kinds of manual labour, not unlike the life in monasteries of other rules. The Rule of Saint Columbanus was approved by the Fourth Council of Mâcon in 627, but it was superseded at the close of the century by the Rule of Saint Benedict. For several centuries in some of the greater monasteries the two rules were observed conjointly.
Character.
Columbanus did not lead a perfect life. According to Jonas and other sources, he could be impetuous and even headstrong, for by nature he was eager, passionate, and dauntless. These qualities were both the source of his power and the cause of his mistakes. His virtues, however, were quite remarkable. Like many saints, he had a great love for God's creatures. Stories claim that as he walked in the woods, it was not uncommon for birds to land on his shoulders to be caressed, or for squirrels to run down from the trees and nestle in the folds of his cowl. Although a strong defender of Irish traditions, he never wavered in showing deep respect for the Holy See as the supreme authority. His influence in Europe was due to the conversions he effected and to the rule that he composed. It may be that the example and success of Columba in Caledonia inspired him to similar exertions. The life of Columbanus stands as the prototype of missionary activity in Europe, followed by such men as Kilian, Vergilius of Salzburg, Donatus of Fiesole, Wilfrid, Willibrord, Suitbert of Kaiserwerdt, Boniface, and Ursicinus of Saint-Ursanne.
Miracles.
The following are the principal miracles attributed to his intercession:
Jonas relates the occurrence of a miracle during Columbanus' time in Bregenz, when that region was experiencing a period of severe famine.
Legacy.
Historian Alexander O'Hara states that Columbanus had a "very strong sense of Irish identity ... He's the first person to write about Irish identity, he's the first Irish person that we have a body of literary work from, so even on that point of view he’s very important in terms of Irish identity." In 1950 a congress celebrating the 1,400th anniversary of his birth took place in Luxeuil, France. It was attended by Robert Schuman, Seán MacBride, the future Pope John XXIII, and John A. Costello who said "All statesmen of today might well turn their thoughts to St Columban and his teaching. History records that it was by men like him that civilisation was saved in the 6th century."
Columbanus is also remembered as the first Irish person to be the subject of a biography. An Italian monk named Jonas of Bobbio wrote a biography of him some twenty years after Columbanus' death. His use of the phrase "totius Europae" (all of Europe) in a letter written in 600 AD to Pope Gregory the Great is the first known use of the expression.
At Saint-Malo in Brittany, there is a granite cross bearing Columbanus's name to which people once came to pray for rain in times of drought. The nearby village of Saint-Coulomb commemorates him in name.
In France, the ruins of Columbanus' first monastery at Annegray are legally protected through the efforts of the Association Internationale des Amis de St Columban, which purchased the site in 1959. The association also owns and protects the site containing the cave that served as Columbanus' cell and the holy well he created nearby. At Luxeuil-les-Bains, the Basilica of Saint Peter stands on the site of Columbanus' first church. A statue near the entrance, unveiled in 1947, shows him denouncing the immoral life of King Theuderic II. Formerly an abbey church, the basilica contains old monastic buildings, which have been used as a minor seminary since the nineteenth century. It is dedicated to Columbanus and houses a bronze statue of him in its courtyard.
Luxeuil Abbey, described in the "Catholic Encyclopedia" as "the nursery of saints and apostles", produced sixty-three apostles who carried his rule, together with the Gospel, into France, Germany, Switzerland, and Italy. These disciples of Columbanus are credited with founding more than a hundred different monasteries. The canton and town still bearing the name of St. Gallen testify to how well one of his disciples succeeded.
Bobbio Abbey became a renowned center of learning in the Early Middle Ages, so famous that it rivaled the monastic community at Monte Cassino in wealth and prestige. St. Attala continued St. Columbanus' work at Bobbio, proselytizing and collecting religious texts for the abbey's library. In Lombardy, San Colombano al Lambro in Milan, San Colombano Belmonte in Turin, and San Colombano Certénoli in Genoa all take their names from the saint.
In 2024, the XXV International Meeting of Columban Associations, "Columban's Day 2024", took place in Piacenza, Italy. The Holy Father said that Columbanus had enriched the Catholic Church: "The life and labours of the Columban monks proved decisive for the preservation and renewal of European culture", he said.
The Missionary Society of Saint Columban, founded in 1916, and the Missionary Sisters of St. Columban, founded in 1924, are both dedicated to Columbanus.
Veneration.
The remains of Columbanus are preserved in the crypt at Bobbio Abbey. Many miracles have been credited to his intercession. In 1482, the relics were placed in a new shrine and laid beneath the altar of the crypt. The sacristy at Bobbio possesses a portion of the skull of Columbanus, his knife, wooden cup, bell, and an ancient water vessel, formerly containing sacred relics and said to have been given to him by Pope Gregory I. According to some authorities, twelve teeth of Columbanus were taken from the tomb in the fifteenth century and kept in the treasury, but these have since disappeared.
Columbanus is named in the "Roman Martyrology" on 23 November, which is his feast day in Ireland. His feast is observed by the Benedictines on 21 November. In art, Columbanus is represented bearded, bearing the monastic cowl, holding in his hand a book with an Irish satchel, and standing in the midst of wolves. Sometimes he is depicted in the attitude of taming a bear, or with sun-beams over his head.
The Bishop of Hereford, John Oliver, suggested Columbanus as a patron of motorcyclists because of his extensive travels through Europe during his lifetime. His patronage was declared by the Vatican in 2002.
|
6503
|
7903804
|
https://en.wikipedia.org/wiki?curid=6503
|
Concord, New Hampshire
|
Concord () is the capital city of the U.S. state of New Hampshire and the seat of Merrimack County. As of the 2020 United States census, the population was 43,976, making it the third-most populous city in New Hampshire after Manchester and Nashua.
The area was first settled by Europeans in 1659. On January 17, 1725, the Province of Massachusetts Bay granted the Concord area as the Plantation of Penacook, and it was incorporated on February 9, 1734, as the Town of Rumford. Governor Benning Wentworth gave the city its current name in 1765 following a boundary dispute with the neighboring town of Bow; the name was meant to signify the new harmony between the two towns. In 1808, Concord was named the official seat of state government, and the State House was completed in 1819; it remains the oldest U.S. state capitol wherein the legislature meets in its original chambers.
Concord is entirely within the Merrimack River watershed and the city is centered on the river. The Merrimack runs from northwest to southeast through the city. The city's eastern boundary is formed by the Soucook River, which separates Concord from the town of Pembroke. The Turkey River passes through the southwestern quarter of the city. The city consists of its downtown, including the North End and South End neighborhoods, along with the four villages of Penacook, Concord Heights, East Concord, and West Concord. Penacook sits along the Contoocook River, just before it flows into the Merrimack.
As of 2020, the top employer in the city was the State of New Hampshire, and the largest private employer was Concord Hospital. Concord is home to the University of New Hampshire School of Law, New Hampshire's only law school; St. Paul's School, a private preparatory school; NHTI, a two-year community college; the New Hampshire Police Academy; and the New Hampshire Fire Academy. Concord's Old North Cemetery is the final resting place of Franklin Pierce, 14th President of the United States.
Interstate 89 and Interstate 93 are the two main interstate highways serving the city, and general aviation access is via Concord Municipal Airport. The nearest airport with commercial air service is Manchester–Boston Regional Airport, to the south. There has been no passenger rail service to Concord since 1981. Historically, the Boston and Maine Railroad served the city.
History.
The area that would become Concord was originally settled thousands of years ago by Abenaki Native Americans called the Pennacook. The tribe fished for migrating salmon, sturgeon, and alewives with nets strung across the rapids of the Merrimack River. The stream was also the transportation route for their birch bark canoes, which could travel from Lake Winnipesaukee to the Atlantic Ocean. The broad sweep of the Merrimack River valley floodplain provided good soil for farming beans, gourds, pumpkins, melons and maize.
The area was first settled by Europeans in 1659 as Penacook, after the Abenaki word "pannukog" meaning "bend in the river," referencing the steep bends of the Merrimack River through the area. On January 17, 1725, the Province of Massachusetts Bay, which then claimed territories west of the Merrimack, granted the Concord area as the Plantation of Penacook. It was settled between 1725 and 1727 by Captain Ebenezer Eastman and others from Haverhill, Massachusetts. On February 9, 1734, the town was incorporated as "Rumford", from which Sir Benjamin Thompson, Count Rumford, would take his title. It was renamed "Concord" in 1765 by Governor Benning Wentworth following a bitter boundary dispute between Rumford and the town of Bow; the city name was meant to reflect the new concord, or harmony, between the disputant towns. Citizens displaced by the resulting border adjustment were given land elsewhere as compensation. In 1779, New Pennacook Plantation was granted to Timothy Walker Jr. and his associates at what would be incorporated in 1800 as Rumford, Maine, the site of Pennacook Falls.
Concord grew in prominence throughout the 18th century, and some of the earliest houses from this period survive at the northern end of Main Street. In the years following the Revolution, Concord's central geographical location made it a logical choice for the state capital, particularly after Samuel Blodget in 1807 opened a canal and lock system to allow vessels passage around the Amoskeag Falls downriver, connecting Concord with Boston by way of the Middlesex Canal. In 1808, Concord was named the official seat of state government, and in 1816 architect Stuart Park was commissioned to design a new capitol building for the state legislature on land sold to the state by local Quakers. Construction on the State House was completed in 1819, and it remains the oldest capitol in the nation in which the state's legislative branches meet in their original chambers. Concord was also named the seat of Merrimack County in 1823, and the Merrimack County Courthouse was constructed in 1857 in the North End at the site of the Old Town House.
In the early 19th century, much of the city's economy was dominated by furniture-making, printing, and granite quarrying; granite had become a popular building material for many monumental halls in the early United States, and Concord granite was used in the construction of both the New Hampshire State House and the Library of Congress in Washington, D.C. In 1828, Lewis Downing joined J. Stephens Abbot to form Abbot and Downing. Their most famous product was their Concord coach, widely used in the development of the American West, and their enterprise largely boosted and changed the city economy in the mid-19th century. In subsequent years, Concord would also become a hub for the railroad industry, with Penacook a textile manufacturing center using water power from the Contoocook River. The city also around this time started to become a center for the emerging healthcare industry, with New Hampshire State Hospital opening in 1842 as one of the first psychiatric hospitals in the United States. The State Hospital continued to expand throughout the following decades, and in 1891 Concord Hospital opened its doors as Margaret Pillsbury General Hospital, the first general hospital in the state of New Hampshire.
Concord's economy changed once again in the 20th century with the declining railroad and textile industry. The city developed into a center for national politics due to New Hampshire's first-in-the-nation primary, and many presidential candidates still visit the Concord area during campaign season. The city also developed an identity within the emerging space industry, with the McAuliffe-Shepard Discovery Center opening in 1990 to commemorate Alan Shepard, the first American in space from nearby Derry, and Christa McAuliffe, a teacher at Concord High School who died in the 1986 Space Shuttle "Challenger" disaster. Today, Concord remains a center for politics, law, healthcare, and insurance companies.
Geography.
Concord is located in south-central New Hampshire at (43.2070, −71.5371). It is north of the Massachusetts border, west of the Maine border, east of the Vermont border, and south of the Canadian border at Pittsburg.
According to the United States Census Bureau, the city has a total area of . of it are land and of it are water, comprising 4.81% of the city. Concord is drained by the Merrimack River. Penacook Lake, the largest lake in the city and its main source of water, is in the west. The highest point in Concord is above sea level on Oak Hill, just west of the hill's summit in neighboring Loudon.
Concord lies fully within the Merrimack River watershed and is centered on the river, which runs from northwest to southeast through the city. Downtown is located on a low terrace to the west of the river, with residential neighborhoods climbing hills to the west and extending southwards towards the town of Bow. To the east of the Merrimack, atop a bluff, is a flat, sandy plain known as Concord Heights, which has seen most of the city's commercial development since 1960. The eastern boundary of Concord (with the town of Pembroke) is formed by the Soucook River, a tributary of the Merrimack. The Turkey River winds through the southwestern quarter of the city, passing through the campus of St. Paul's School before entering the Merrimack River in Bow. In the northern part of the city, the Contoocook River enters the Merrimack at the village of Penacook.
Concord is north of Manchester, New Hampshire's largest city, and north of Boston.
Villages.
The city of Concord is made up of its downtown, including its North End and South End neighborhoods, plus the four distinct villages of Penacook, Concord Heights, East Concord, and West Concord.
Climate.
Concord, as with much of New England, is within the humid continental climate zone (Köppen "Dfb"), with long, cold, snowy winters, warm (and at times humid) summers, and relatively brief autumns and springs. In winter, successive storms deliver moderate to at times heavy snowfall amounts, contributing to the relatively reliable snow cover. In addition, lows reach below on an average 15 nights per year, and the city straddles the border between USDA Hardiness Zone 5b and 6a. However, thaws are frequent, with one to three days per month with + highs from December to February. Summer can bring stretches of humid conditions as well as thunderstorms, and there is an average of 12 days of + highs annually. The window for freezing temperatures on average begins on September 27 and expires on May 14.
The monthly daily average temperature ranges from in January to in July. Temperature extremes have ranged from in February 1943 to in July 1966.
Demographics.
As of the 2020 United States census, there were 43,976 people residing in the city. The population density was . At the 2010 Census there were 42,695 residents and 10,052 families in the city, as well as 18,852 housing units at an average density of . The racial makeup of the city in 2020 was 84.5% White, 4.9% Black or African American, 1.0% Native American, 4.9% Asian, 0.1% Pacific Islander, 0.4% from some other race, and 1.8% from two or more races. 4.9% of the population were Hispanic or Latino of any race.
In 2010 there were 17,592 households, out of which 28.7% had children under the age of 18 living with them, 41.3% were headed by married couples living together, 11.6% had a female householder with no husband present, and 42.9% were non-families. 33.6% of all households were made up of individuals, and 12.0% were someone living alone who was 65 years of age or older. The average household size was 2.26, and the average family size was 2.90.
In the city, the population was spread out, with 20.7% under the age of 18, 9.3% from 18 to 24, 28.0% from 25 to 44, 28.2% from 45 to 64, and 13.8% who were 65 years of age or older. The median age was 39.4 years. For every 100 females, there were 98.5 males. For every 100 females age 18 and over, there were 96.9 males.
For the period 2019–2023, the median annual income for a household in the city was $83,701. The per capita income for the city was $45,420. About 8.7% of those in Concord were below the poverty line during 2019–2023.
The most reported ancestries in 2020 were:
Economy.
Top employers.
In 2020, the top employer in the city remained the State of New Hampshire, with over 6,000 employed workers, while the largest private employer was Concord Hospital, with just under 3,000 employees. According to the City of Concord's Comprehensive Annual Financial Report, the top 10 employers in the city for the Fiscal Year 2020 were:
Transportation.
Highways.
Interstate 89 and Interstate 93 are the two main interstate highways serving Concord, and join just south of the city limits. Interstate 89 links Concord with Lebanon and the state of Vermont to the northwest, while Interstate 93 connects the city to Plymouth, Littleton, and the White Mountains to the north and Manchester and Boston to the south. Interstate 393 is a spur highway leading east from Concord and merging with U.S. Route 4 as a direct route to New Hampshire's Seacoast region. North-south U.S. Route 3 serves as Concord's Main Street, while U.S. Route 202 and New Hampshire Route 9 cross the city from east to west. State routes 13 and 132 also serve the city: Route 13 leads southwest out of Concord towards Goffstown and Milford, while Route 132 travels north parallel to Interstate 93. New Hampshire Route 106 passes through the easternmost part of Concord, crossing I-393 and NH 9 before crossing the Soucook River south into the town of Pembroke. To the north, NH 106 leads to Loudon, Belmont and Laconia.
Railroads.
Historically, Concord served as an important railroad terminal and station for the Boston and Maine Railroad. The former Concord Station was located at what is now a Burlington department store on Storrs Street. The station itself was built in 1860, but the fourth and most famous iteration of the station was built in 1885, which had a brick head house designed by Bradford L. Gilbert. The head house was demolished in 1959 and replaced by a smaller "McGinnis Era" station. By 1967, all passenger rail services to Concord had been discontinued. For 13 months in 1980 and 1981, MBTA Commuter Rail ran two round trips a day between Boston and Concord. Since then, there has not been any passenger rail service to Concord.
In 2021, Amtrak announced their plan to implement new service between Boston and Concord by 2035.
Bus.
Local bus service is provided by Concord Area Transit (CAT), with three routes through the city. Regional bus service provided by Concord Coach Lines and Greyhound Lines is available from the Concord Transportation Center at 30 Stickney Avenue next to Exit 14 on Interstate 93, with service south to Boston and points in between, as well as north to Littleton and northeast to Berlin.
Other modes.
General aviation services are available through Concord Municipal Airport, located east of downtown. There is no commercial air service within the city limits; the nearest such airport is Manchester–Boston Regional Airport, to the south.
Complete Streets Improvement Project.
Concord's downtown underwent a significant renovation between 2015 and 2016, during the city's "Complete Streets Improvement Project". At a proposed cost of $12 million, the project promised to deliver on categories of maintenance to aging infrastructure, improved accessibility, increased sustainability, a safer experience for walkers, bikers and motorists alike, and to stimulate economic growth in an increasingly idle downtown. The main infrastructural change was reducing the four-lane street (two in each direction) to two lanes plus a turning lane in the center. The freed-up space would contribute to extra width for bikes to ride in either direction, increased curb size and an added median where there is no need for a turning lane. Concord opted to add shared lane markings for bikes, rather than a dedicated protected bike lane.
By adding curb space, this project created new opportunities for pedestrians to enjoy the downtown. Many power lines were buried, and street trees, colorful benches, art installations, and other green spaces were added, all allowing people to reclaim a space long dominated by cars. Main Street underwent serious traffic calming, including a road diet, increased diagonal parking, widening sidewalks, adding shared lane markings, adding trees, texturing medians and coloring crosswalks red. Another aspect of the new construction was adding heated sidewalk capabilities, utilizing excess steam from the local Concord Steam plant, and minimizing sand and snow blowing needed during the winter months.
Funding for Complete Streets came from a $4,710,000 USDOT TIGER grant, with the rest covered by the City of Concord. The project was initially proposed at a cost of $7,850,000 but ran over budget as its scope grew. After some of the most expensive elements were scrapped, the budget ended up at $14.2 million, with the project actually coming in $1.1 million below that. Although spending the surplus on final aesthetic touches was debated, the city council decided to save it for financially straining years ahead. The design was carried out by McFarland Johnson, IBI Group, and City of Concord Engineering.
Government.
Concord is governed via the council-manager system. The city council consists of a mayor and 14 councilors, ten of whom are elected to two-year terms representing each of the city's wards, while the other four are elected at large to four-year terms. The mayor is elected directly every two years. The current mayor as of 2024 is Byron Champlin, who was elected on November 7, 2023, with more than 75% of the vote.
According to the Concord city charter, the mayor chairs the council but has very few formal powers over the day-to-day management of the city. The actual operations of the city are overseen by the city manager, currently Thomas J. Aspell Jr. The current police chief is Bradley S. Osgood.
In the New Hampshire Senate, Concord is in the 15th District, represented by Democrat Becky Whitley since December 2020. On the New Hampshire Executive Council, Concord is in the 2nd District, represented by Cinde Warmington, the sole Democrat on the council. In the United States House of Representatives, Concord is in New Hampshire's 2nd congressional district, represented by Democrat Maggie Goodlander.
New Hampshire Department of Corrections operates the New Hampshire State Prison for Men and New Hampshire State Prison for Women in Concord.
Concord leans strongly Democratic in presidential elections; the last Republican nominee to carry the city was then Vice President George H. W. Bush in 1988. Voter turnout was 72.7% in the 2020 general election, down from 76.2% in 2016, but still above the 2020 national turnout of 66.7%.
Media.
Newspapers and journals
Radio
The city is otherwise served by . New Hampshire Public Radio is headquartered in Concord.
Television
Sites of interest.
The New Hampshire State House, designed by architect Stuart Park and constructed between 1815 and 1818, is the oldest state house in which the legislature meets in its original chambers. The building was remodeled in 1866, and the third story and west wing were added in 1910.
Across from the State House is the Eagle Hotel on Main Street, which has been a downtown landmark since its opening in 1827. U.S. Presidents Ulysses S. Grant, Rutherford Hayes, and Benjamin Harrison all dined there, and Franklin Pierce spent the night before departing for his inauguration. Other well-known guests included Jefferson Davis, Charles Lindbergh, Eleanor Roosevelt, Richard M. Nixon (who carried New Hampshire in all three of his presidential bids), and Thomas E. Dewey. The hotel closed in 1961.
South from the Eagle Hotel on Main Street is Phenix Hall, which replaced "Old" Phenix Hall, which burned in 1893. Both the old and new buildings featured multi-purpose auditoriums used for political speeches, theater productions, and fairs. Abraham Lincoln spoke at the old hall in 1860; Theodore Roosevelt, at the new hall in 1912.
North on Main Street is the Walker-Woodman House, also known as the Reverend Timothy Walker House, the oldest standing two-story house in Concord. It was built for the Reverend Timothy Walker between 1733 and 1735.
On the north end of Main Street is the Pierce Manse, in which President Franklin Pierce lived in Concord before and following his presidency. The mid-1830s Greek Revival house was moved from Montgomery Street to North Main Street in 1971 to prevent its demolition.
Beaver Meadow Golf Course, located in the northern part of Concord, is one of the oldest golf courses in New England. Besides this golf course, other important sporting venues in Concord include Everett Arena and Memorial Field.
The SNOB (Somewhat North Of Boston) Film Festival, started in the fall of 2002, brings independent films and filmmakers to Concord and has provided an outlet for local filmmakers to display their films. SNOB Film Festival was a catalyst for the building in 2007 of Red River Theatres, a locally owned, nonprofit, independent cinema named after the 1948 film featuring John Wayne.
Other sites of interest include the Capitol Center for the Arts, the New Hampshire Historical Society, which has two facilities in Concord, and the McAuliffe-Shepard Discovery Center, a science museum named after Christa McAuliffe, the Concord teacher who died during the Space Shuttle Challenger disaster in 1986, and Alan Shepard, the Derry-born astronaut who was the second person and first American in space as well as the fifth and oldest person to walk on the Moon.
Education.
Public schools.
Concord's public schools are within the Concord School District, except for schools in the Penacook area of the city, which are within the Merrimack Valley School District, a district which also includes several towns north of Concord. The only public high school in the Concord School District is Concord High School, which had about 1,450 students as of Fall 2023. The only public middle school in the Concord School District is Rundlett Middle School, which had roughly 770 students as of Fall 2023. Concord School District's elementary schools underwent a major re-configuration in 2012, with three newly constructed schools opening and replacing six previous schools. Kimball School and Walker School were replaced by Christa McAuliffe School on the Kimball School site, Conant School (and Rumford School, which closed a year earlier) were replaced by Abbot-Downing School at the Conant site, and Eastman and Dame schools were replaced by Mill Brook School, serving kindergarten through grade two, located next to Broken Ground Elementary School, serving grades three to five. Beaver Meadow School, the remaining elementary school, was unaffected by the changes.
Concord schools in the Merrimack Valley School District include Merrimack Valley High School and Merrimack Valley Middle School, which are adjacent to each other and to Rolfe Park in Penacook village, and Penacook Elementary School, just south of the village.
Private and charter schools.
Concord has two parochial schools, Bishop Brady High School and Saint John Regional School.
Other area private schools include Concord Christian Academy, Parker Academy, Trinity Christian School, and Shaker Road School. Also in Concord is St. Paul's School, a boarding school located in the city's West End neighborhood.
Post-secondary schools.
Concord is home to New Hampshire Technical Institute, the city's primary community college, and Granite State College, which offers online two-year and four-year degrees. The University of New Hampshire School of Law is located near downtown, and the Franklin Pierce University Doctorate of Physical Therapy program also has a location in the city. Concord Hospital recently announced plans to open a joint program with the New England College School of Nursing as part of their Bachelor of Nursing degree. Concord is also a major clinical site of Dartmouth College's Geisel School of Medicine, New Hampshire's only medical school.
|
6505
|
7903804
|
https://en.wikipedia.org/wiki?curid=6505
|
Chlorophyceae
|
The Chlorophyceae, also known as chlorophycean algae, are one of the classes of green algae, within the phylum Chlorophyta. They are a large assemblage of mostly freshwater and terrestrial organisms; many members are important primary producers in the ecosystems they inhabit. Their body plans are diverse and range from single flagellated or non-flagellated cells to colonies or filaments of cells. The class Chlorophyceae has been distinguished on the basis of ultrastructural morphology; molecular traits are also being used to classify taxa within the class.
Description.
Chlorophycean algae are eukaryotic organisms composed of cells which occur in a variety of forms. Depending on the species, Chlorophyceae can be unicellular (e.g. "Chlamydomonas"), colonial (e.g. "Volvox"), coenocytic (e.g. "Characiosiphon"), or filamentous (e.g. "Chaetophora"). In their vegetative state, some members have flagella while others produce them only in reproductive stages; still others never produce flagella.
Chloroplasts.
Chlorophycean algae have chloroplasts and nearly all members are photosynthetic. There are a few exceptions, such as "Polytoma", which have plastids that have lost the ability to photosynthesize. They are usually green due to the presence of chlorophyll "a" and "b"; they can also contain the pigment beta-carotene. Chloroplasts are diverse in morphology, including cup-shaped (e.g. "Chlamydomonas"), axial, or parietal and reticulate (e.g. "Oedogonium") forms.
In many species, there may be one or more storage bodies called pyrenoids (central proteinaceous body covered with a starch sheath) that are localised around the chloroplast. Some algae may also store food in the form of oil droplets. The inner cell wall layer is made of cellulose and the outer layer of pectose.
Ultrastructure.
Cells of Chlorophyceae usually have two or four flagella, but in some cases may have numerous flagella. The flagella emerge from the apex of the cell, and are connected to the nucleus via rhizoplasts. The arrangement of flagella may be in one of two configurations, termed CW ("clockwise") or DO ("directly opposed"). In the CW configuration, the basal bodies are arranged clockwise in the 1–7 o'clock position. In the DO configuration, the basal bodies are arranged in 12–6 o'clock. Taxa with the CW arrangement and DO arrangement correspond to two different clades, roughly corresponding to the orders Chlamydomonadales and Sphaeropleales, respectively.
A combination of ultrastructural features are characteristic of the Chlorophyceae. These include: closed mitosis, the telophase spindle collapsing before cytokinesis, and a system of microtubules called a phycoplast running parallel to the plane of cytokinesis.
Reproduction.
Chlorophyceae can reproduce both asexually and sexually. In asexual reproduction, cells may produce autospores, aplanospores or zoospores. Autospores (by definition) lack flagella and appear as smaller versions of vegetative cells. Zoospores typically have an elongate, hydrodynamic shape and often have eyespots. Aplanospores are similar to zoospores in that they have characteristics typical of zoospores (such as contractile vacuoles), but lack flagella.
In addition to normal asexual reproduction, some genera such as "Chlamydomonas" and "Dunaliella" can go through a temporary phase known as the "palmella stage", in which flagella are absent and the cells divide vegetatively within a common mucilaginous envelope. Algae enter the palmella stage in response to stressful conditions, such as changes in salinity or predation. Additionally, "Haematococcus" produces resistant stages with thick cell walls, termed akinetes.
Sexual reproduction shows considerable variation in the type and formation of sex cells; it may be isogamous (with two morphologically identical gamete types), anisogamous (with two morphologically distinct gamete types), or oogamous (with larger, nonmotile eggs and smaller motile sperm cells). Members of Chlorophyceae that undergo sexual reproduction have a zygotic life cycle, in which the zygotes are the only diploid stages. Zygotes may have thick and/or spiny cell walls; these are called hypnozygotes and they also function as resting stages.
They share many similarities with higher plants, including the presence of asymmetrical flagellated cells, the breakdown of the nuclear envelope at mitosis, and the presence of phytochromes, flavonoids, and the chemical precursors to the cuticle. However, unlike higher plants they do not go through a multicellular alternation of generations.
Taxonomy.
The current taxonomy of algae is based on molecular phylogenetics. Older classifications are simpler and more morphologically aligned; however, these classifications are recognized as artificial due to the extensive morphological convergence present within the class (and more broadly within algae). In even older, historical classifications, the term Chlorophyceae is sometimes used to apply to all the green algae except the Charales, and the internal division is considerably different.
AlgaeBase accepted the following orders in the class Chlorophyceae:
Along with these genera, AlgaeBase recognizes several taxa that are incertae sedis (i.e. unplaced to an order):
Other orders that have been recognized include:
Phylogeny.
Current thinking on phylogenetic relationships is as follows:
|
6508
|
4007668
|
https://en.wikipedia.org/wiki?curid=6508
|
Cyril
|
Cyril (also Cyrillus or Cyryl) is a masculine given name. It is derived from the Greek name ("Kýrillos"), meaning 'lordly, masterful', which in turn derives from Greek ("kýrios") 'lord'. Variant forms of the name include "Cyrill", "Cyrille", "Ciril", "Kirill", "Kiryl", "Kirillos", "Kyrylo", "Kiril", "Kiro", "Kyril", "Kyrill" and "Quirrel".
It may also refer to:
|
6511
|
39541744
|
https://en.wikipedia.org/wiki?curid=6511
|
Computational complexity
|
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem.
The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function n → f(n), where n is the size of the input and f(n) is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and to change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n.
Resources.
Time.
The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.
The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on "any" computer. This is achieved by counting the number of "elementary operations" that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called "steps".
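To make this idea concrete, the following sketch (illustrative Python with hypothetical names, not a formal cost model) counts the comparisons performed by selection sort instead of measuring seconds; the count, n(n − 1)/2, is identical on every computer, which is precisely the machine independence that step counting provides:

    def selection_sort_counting(items):
        """Sort a list in place; return the number of comparisons performed."""
        comparisons = 0
        n = len(items)
        for i in range(n):
            smallest = i
            for j in range(i + 1, n):
                comparisons += 1  # one elementary operation: a comparison
                if items[j] < items[smallest]:
                    smallest = j
            items[i], items[smallest] = items[smallest], items[i]
        return comparisons

    for n in (10, 100, 1000):
        print(n, selection_sort_counting(list(range(n, 0, -1))))
        # prints 45, 4950 and 499500 comparisons: n*(n-1)/2 on any machine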
Bit complexity.
Formally, the "bit complexity" refers to the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor. On computers, the number of operations on machine words that are needed is also proportional to the bit complexity. So, the "time complexity" and the "bit complexity" are equivalent for realistic models of computation.
Space.
Another important resource is the size of computer memory that is needed for running algorithms.
Communication.
For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties.
Others.
The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor.
For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is O(n^3) for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to Õ(n^4) (soft O notation).
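A minimal sketch of this gap, using repeated squaring rather than the determinant example above: each step below is a single arithmetic operation, yet the operand roughly doubles in size at every step, so the bit cost of the later multiplications grows exponentially with the number of steps performed.

    x = 3
    for k in range(1, 11):
        x = x * x                 # one arithmetic operation per iteration...
        print(k, x.bit_length())  # ...but the operand size doubles every time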
In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.
Complexity as a function of input size.
It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size n (in bits) of the input, and therefore, the complexity is a function of n. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used.
The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, this is the worst-case time complexity that is considered.
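As a simple illustration of the distinction (a Python sketch with hypothetical names), consider linear search: looking for an absent element in a list of n items costs n comparisons, the worst case, while finding a uniformly random present element costs about (n + 1)/2 comparisons on average.

    from random import randrange

    def search_steps(items, target):
        """Return the number of comparisons linear search performs."""
        steps = 0
        for x in items:
            steps += 1
            if x == target:
                break
        return steps

    n = 1000
    items = list(range(n))
    worst = search_steps(items, -1)  # absent target: all n items are examined
    trials = 10_000
    average = sum(search_steps(items, randrange(n))
                  for _ in range(trials)) / trials
    print(worst, average)            # 1000, and roughly 500.5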
Asymptotic complexity.
It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values have little practical application, as any change of computer or of model of computation would change the complexity somewhat. Moreover, the resource use is not critical for small values of n, and this means that, for small n, ease of implementation is generally more interesting than a low complexity.
For these reasons, one generally focuses on the behavior of the complexity for large n, that is, on its asymptotic behavior as n tends to infinity. Therefore, the complexity is generally expressed using big O notation.
For example, the usual algorithm for integer multiplication has a complexity of O(n^2); this means that there is a constant c_u such that the multiplication of two integers of at most n digits may be done in a time less than c_u·n^2. This bound is "sharp" in the sense that the worst-case complexity and the average-case complexity are Ω(n^2), which means that there is a constant c_l such that these complexities are larger than c_l·n^2. The radix does not appear in these complexities, as changing the radix changes only the constants c_u and c_l.
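The sketch below makes the sharpness of the bound concrete (an illustration, assuming a little-endian list-of-digits representation): schoolbook multiplication of two n-digit numbers performs exactly n^2 single-digit multiplications, so the operation count divided by n^2 is a constant, consistent with both the O(n^2) upper bound and the Ω(n^2) lower bound, in any radix.

    def schoolbook_multiply(a_digits, b_digits, base=10):
        """Multiply two little-endian digit lists; return (digits, op count)."""
        ops = 0
        result = [0] * (len(a_digits) + len(b_digits))
        for i, a in enumerate(a_digits):
            for j, b in enumerate(b_digits):
                ops += 1                  # one single-digit multiplication
                result[i + j] += a * b
        carry = 0                         # final carry propagation
        for k in range(len(result)):
            total = result[k] + carry
            result[k] = total % base
            carry = total // base
        return result, ops

    for n in (8, 16, 32, 64):
        _, ops = schoolbook_multiply([7] * n, [7] * n)
        print(n, ops, ops / n ** 2)       # the ratio is always exactly 1.0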
Models of computation.
The evaluation of the complexity relies on the choice of a model of computation, which consists of defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as integer multiplication in time O(n log n), that the explicit definition of the model of computation is required for proofs.
Deterministic models.
A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers.
When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.
Non-deterministic computation.
In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states when running specific quantum algorithms, such as Shor's factorization, which has so far been demonstrated only on small integers (for example, 21 = 3 × 7).
Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the Knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem are NP-complete. For all these problems, the best known algorithm has exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. It is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for interesting lengths of input.
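The asymmetry behind the conjecture can be sketched with subset sum, a standard NP-complete problem (the code and names below are illustrative): the known exact search examines up to 2^n subsets, while checking a proposed solution, the certificate, takes only polynomial time.

    from itertools import combinations

    def subset_sum_search(numbers, target):
        """Brute force: examine up to 2**len(numbers) subsets."""
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    def verify_certificate(numbers, target, subset):
        """Polynomial-time check of a claimed solution (ignores multiplicity)."""
        return all(x in numbers for x in subset) and sum(subset) == target

    nums = [3, 34, 4, 12, 5, 2]
    witness = subset_sum_search(nums, 9)  # exponential-time search
    print(witness, verify_certificate(nums, 9, witness))  # (4, 5) True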
Parallel and distributed computation.
Parallel and distributed computing consist of splitting computation over several processors, which work simultaneously. The difference between the models lies mainly in the way information is transmitted between processors. Typically, in parallel computing the data transmission between processors is very fast, while in distributed computing the data transmission is done through a network and is therefore much slower.
The time needed for a computation on N processors is at least the quotient by N of the time needed by a single processor. In fact, this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor.
The main complexity problem is thus to design algorithms such that the product of the computation time by the number of processors is as close as possible to the time needed for the same computation on a single processor.
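The gap between the ideal bound and what is achievable can be made concrete with Amdahl's-law-style arithmetic; the running time and serial fraction below are hypothetical, chosen only to show how a small non-parallelizable part caps the speedup.

```python
def parallel_time(serial_time, serial_fraction, processors):
    """Lower-bound running time on p processors when a fraction
    of the work cannot be parallelized (Amdahl's law)."""
    serial_part = serial_time * serial_fraction
    parallel_part = serial_time * (1 - serial_fraction)
    return serial_part + parallel_part / processors

t1 = 100.0  # hypothetical single-processor time
for p in (1, 4, 16, 64):
    tp = parallel_time(t1, 0.05, p)  # assume 5% is inherently serial
    print(f"p={p:3d}  time={tp:6.2f}  speedup={t1 / tp:5.2f}  ideal={p}")
# Even with 64 processors the speedup stays below 1/0.05 = 20.
```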
Quantum computing.
A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer.
Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.
Problem complexity (lower bounds).
The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.
It follows that every complexity bound of an algorithm that is expressed with big O notation is also an upper bound on the complexity of the corresponding problem.
On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.
For solving most problems, it is required to read all input data, which, normally, needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity formula_11
The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of polynomial equations of degree d in n indeterminates may have up to formula_12 complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is formula_13 For this problem, an algorithm of complexity formula_14 is known, which may thus be considered as asymptotically quasi-optimal.
A nonlinear lower bound of formula_15 is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is formula_16 This lower bound results from the fact that there are n! ways of ordering n objects. As each comparison splits this set of orders into two parts, the number of comparisons that are needed for distinguishing all orders must verify formula_17 which implies formula_18 by Stirling's formula.
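Written out, the counting argument is as follows (a standard derivation, with c denoting the number of comparisons and logarithms taken in base 2):

```latex
% Each comparison at most halves the set of n! candidate orders,
% so c comparisons can distinguish at most 2^c orders.
2^{c} \geq n!
\quad\Longrightarrow\quad
c \geq \log_{2}(n!) = n\log_{2}n - n\log_{2}e + O(\log n) = \Omega(n\log n).
```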
A standard method for getting lower bounds of complexity consists of "reducing" a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size f(n) of a problem B, and that the complexity of A is formula_19 Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. Then the complexity of the problem B is formula_20 This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is formula_21 for every positive integer k.
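As a concrete instance of this technique (a standard textbook reduction, not taken from the source): multiplying two n × n matrices A and B can be encoded as squaring a single 2n × 2n block matrix, since

```latex
% The product AB can be read off from the square of the block matrix,
% so any algorithm for squaring also multiplies, with f(n) = 2n.
\begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}^{2}
=
\begin{pmatrix} AB & 0 \\ 0 & BA \end{pmatrix}.
```

Hence any lower bound on the complexity of matrix multiplication transfers, up to a constant factor in the input size, to matrix squaring.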
Use in algorithm design.
Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected.
It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because the increased power allows working with larger input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require formula_22 comparisons would have to do a trillion comparisons, which would need around 30 hours at the speed of 10 million comparisons per second. On the other hand, quicksort and merge sort require only formula_23 comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For n = 1,000,000, this gives approximately 30,000,000 comparisons, which would take only 3 seconds at 10 million comparisons per second.
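A quick back-of-the-envelope check of these figures (a sketch; the rate of 10 million comparisons per second is the assumption used above):

```python
import math

n = 1_000_000          # list size
rate = 10_000_000      # assumed comparisons per second

quadratic = n * n                     # elementary algorithms
linearithmic = n * math.log2(n)       # quicksort / merge sort

print(f"n^2 comparisons:      {quadratic:.2e} -> {quadratic / rate / 3600:.1f} hours")
print(f"n log2 n comparisons: {linearithmic:.2e} -> {linearithmic / rate:.1f} seconds")
# n^2 gives 1e12 comparisons, i.e. roughly 28 hours at this rate;
# bare n*log2(n) gives about 2e7, i.e. about 2 seconds (the 30,000,000
# figure above includes constant factors beyond the bare n*log2(n)).
```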
Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. It may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort of improving the efficiency of an implementation on these steps.
|
6512
|
11055690
|
https://en.wikipedia.org/wiki?curid=6512
|
Coercion
|
Coercion involves compelling a party to act in an involuntary manner through the use of threats, including threats to use force against that party. It involves a set of forceful actions which violate the free will of an individual in order to induce a desired response. These actions may include extortion, blackmail, or even torture and sexual assault. Common-law systems codify the act of violating a law while under coercion as a duress crime.
Coercion used as leverage may force victims to act in a way contrary to their own interests. Coercion can involve not only the infliction of bodily harm, but also psychological abuse (the latter intended to enhance the perceived credibility of the threat). The threat of further harm may also lead to the acquiescence of the person being coerced. The concepts of coercion and persuasion are similar, but various factors distinguish the two. These include the intent, the willingness to cause harm, the result of the interaction, and the options available to the coerced party.
Political authors such as John Rawls, Thomas Nagel, and Ronald Dworkin debate whether governments are inherently coercive. In 1919, Max Weber (1864–1920), building on the view of Ihering (1818–1892), defined a state as "a human community that (successfully) claims a monopoly on the legitimate use of physical force". Morris argues that the state can operate through incentives rather than coercion. Healthcare systems may use informal coercion to make a patient adhere to a doctor's treatment plan. Under certain circumstances, medical staff may use physical coercion to treat a patient involuntarily, a practice which raises ethical concerns. Such practices have also been shown to cause moral distress among healthcare staff, especially when staff attitudes toward coercive measures are negative. To minimize the need for coercion in psychiatric care, various models such as "Safewards" and "Six Core Strategies" have been implemented with promising results.
Overview.
The purpose of coercion is to substitute a victim's own aims with weaker ones that the aggressor wants the victim to have. For this reason, many social philosophers have considered coercion the polar opposite of freedom. Various forms of coercion are distinguished: first on the basis of the "kind of injury" threatened, second according to its "aims" and "scope", and finally according to its "effects", on which its legal, social, and ethical implications mostly depend.
Physical.
Physical coercion is the most commonly considered form of coercion, where the content of the conditional threat is the use of force against a victim, their relatives or property. An often-used example is "putting a gun to someone's head" ("at gunpoint") or putting a "knife under the throat" ("at knifepoint" or cut-throat) to compel action under the threat that non-compliance may result in the attacker harming or even killing the victim. These are so common that they are also used as metaphors for other forms of coercion.
Armed forces in many countries use firing squads to maintain discipline and intimidate the masses, or opposition, into submission or silent compliance. However, there are also nonphysical forms of coercion, where the threatened injury does not immediately imply the use of force. Byman and Waxman (2000) define coercion as "the use of threatened force, including the limited use of actual force to back up the threat, to induce an adversary to behave differently than it otherwise would." In many cases, coercion does not amount to the destruction of property or life, since compliance is the goal.
|
6513
|
46628330
|
https://en.wikipedia.org/wiki?curid=6513
|
Client–server model
|
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share its computing resources, but it requests content or service from a server and may share its own content as part of the request. Clients, therefore, initiate communication sessions with servers, which await incoming requests.
Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
Client and server role.
The server component provides a function or service to one or many clients, which initiate requests for such services.
Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a "service".
Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called "inter-server" or "server-to-server" communication.
Client and server communication.
Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.
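As a minimal illustration of this request–response pattern (a sketch using a made-up one-line protocol rather than any standard application protocol), the following pairs a TCP server with a client; note that the server waits for incoming requests while the client initiates the session:

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000  # hypothetical address and port for the example

def server():
    """Await a single request and send back a response."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()              # block until a client connects
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"echo: {request}".encode())  # the response

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client initiates the communication session.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")               # the request
    print(cli.recv(1024).decode())      # prints "echo: hello"
```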
A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates.
Encryption should be applied if sensitive information is to be communicated between the client and the server.
Example.
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials are compared against a database, and the webserver accesses that database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the webserver. Finally, the webserver returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete.
This example illustrates a design pattern applicable to the client–server model: separation of concerns.
Server-side.
Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations which run on the client.
General concepts.
"Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure.
Client and server programs may be commonly available ones such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another.
Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks.
Computer security.
In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access database and other files in the same manner as authorized administrators of the server.
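The SQL-injection risk described above can be sketched with Python's built-in sqlite3 module (the table, column, and input value are invented for the example): concatenating untrusted input into the SQL text lets it alter the query's structure, while a parameterized query keeps it as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL text itself,
# so the OR clause becomes part of the query and matches every row.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # [('s3cret',)] -- data leaked

# Safer: a parameterized query treats the input purely as a value.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no match
```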
Examples.
In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc.
Web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use.
Client side.
Client-side refers to operations that are performed by the client in a computer network.
General concepts.
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another.
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
Computer security.
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.
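The cross-site-scripting risk just mentioned can be sketched as follows (the rendering functions are invented for the example): inserting untrusted text into HTML unescaped allows it to run as script, whereas escaping renders it inert.

```python
import html

def render_comment(comment):
    # Vulnerable: untrusted text is pasted straight into the markup.
    return f"<p>{comment}</p>"

def render_comment_escaped(comment):
    # Safer: escaping turns <, >, & and quotes into harmless entities.
    return f"<p>{html.escape(comment)}</p>"

payload = "<script>steal_cookies()</script>"
print(render_comment(payload))          # script tag survives -> would execute
print(render_comment_escaped(payload))  # &lt;script&gt;... -> shown as text
```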
Examples.
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
Early history.
An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms "server-host" (or "serving host") and "user-host" (or "using-host"), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (the predecessor of the Internet).
Client-host and server-host.
"Client-host" and "server-host" have subtly different meanings than "client" and "server". A host is any computer connected to a network. Whereas the words "server" and "client" may refer either to a computer or to a computer program, "server-host" and "client-host" always refer to computers. The host is a versatile, multifunction computer; "clients" and "servers" are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word "client" occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word "server" had entered into general parlance.
Centralized computing.
The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions.
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.
Comparison with peer-to-peer architecture.
In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.
In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.
Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
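The dispatch policy itself can be very simple; below is a minimal round-robin sketch (the backend names are placeholders) of the per-request decision a load balancer makes:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand each incoming request to the next backend in turn."""
    def __init__(self, backends):
        self._backends = cycle(backends)

    def route(self, request):
        backend = next(self._backends)   # next server in the rotation
        return backend, request

lb = RoundRobinBalancer(["backend-a", "backend-b", "backend-c"])
for req_id in range(5):
    print(lb.route(f"request-{req_id}"))
# request-0 -> backend-a, request-1 -> backend-b, request-2 -> backend-c,
# request-3 -> backend-a, and so on around the rotation.
```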
In a peer-to-peer network, two or more computers ("peers") pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.
|
6514
|
277086
|
https://en.wikipedia.org/wiki?curid=6514
|
County Dublin
|
County Dublin is a county in Ireland, and holds its capital city, Dublin. It is located on the island's east coast, within the province of Leinster. Until 1994, County Dublin (excluding the city) was a single local government area; in that year, the county council was divided into three new administrative counties: Dún Laoghaire–Rathdown, Fingal and South Dublin. The three administrative counties together with Dublin City proper form a NUTS III statistical region of Ireland (coded IE061). County Dublin remains a single administrative unit for the purposes of the courts (including the Dublin County Sheriff, but excluding the bailiwick of the Dublin City Sheriff) and Dublin County combined with Dublin City forms the Judicial County of Dublin, including Dublin Circuit Court, the Dublin County Registrar and the Dublin Metropolitan District Court. Dublin also sees law enforcement (the Garda Dublin metropolitan division) and fire services (Dublin Fire Brigade) administered county-wide.
Dublin is Ireland's most populous county, with a population of 1,458,154 – approximately 28% of the Republic of Ireland's total population. Dublin city is the capital and largest city of the Republic of Ireland, as well as the largest city on the island of Ireland. Roughly 9 out of every 10 people in County Dublin live within Dublin city and its suburbs. Several sizeable towns that are considered separate from the city, such as Rush, Donabate and Balbriggan, are located in the far north of the county. Swords, while separated from the city by a green belt around Dublin Airport, is considered a suburban commuter town and an emerging small city.
The third smallest county by land area, Dublin is bordered by Meath to the west and north, Kildare to the west, Wicklow to the south and the Irish Sea to the east. The southern part of the county is dominated by the Dublin Mountains, which rise to around and contain numerous valleys, reservoirs and forests. The county's east coast is punctuated by several bays and inlets, including Rogerstown Estuary, Broadmeadow Estuary, Baldoyle Bay and most prominently, Dublin Bay. The northern section of the county, today known as Fingal, varies enormously in character, from densely populated suburban towns of the city's commuter belt to flat, fertile plains, which are some of the country's largest horticultural and agricultural hubs.
Dublin is the oldest county in Ireland, and was the first part of the island to be shired following the Norman invasion in the late 1100s. While it is no longer a local government area, Dublin retains a strong identity, and continues to be referred to as both a region and county interchangeably, including at government body level.
Etymology.
County Dublin is named after the city of Dublin, which is an anglicisation of its Old Norse name "Dyflin". The city was founded in the 9th century AD by Viking settlers who established the Kingdom of Dublin. The Viking settlement was preceded by a Christian ecclesiastical site known as "Duibhlinn", from which "Dyflin" took its name. "Duibhlinn" derives from the Middle Irish for "Blackpool", from "dub" ("black, dark") and "linn" ("pool"), referring to a dark tidal pool. This tidal pool was located where the River Poddle entered the Liffey, to the rear of Dublin Castle.
The hinterland of Dublin in the Norse period was named "Dyflinnarskíri".
In addition to "Duibhlinn", a Gaelic settlement known as "Áth Cliath" ('ford of hurdles') was located further up the Liffey, near present-day Father Mathew Bridge. "Baile Átha Cliath" means 'town of the hurdled ford', with "áth" referring to a fording point along the river. As with "Duibhlinn", an early Christian monastery was also located at "Áth Cliath", on the site that is currently occupied by the Whitefriar Street Carmelite Church.
Dublin was the first county in Ireland to be shired after the Norman Conquest in the late 12th century. The Normans captured the Kingdom of Dublin from its Norse-Gael rulers and the Norse name "Dyflin" was used as the basis for the county's official Anglo-Norman (and later English) name. However, in Modern Irish the region was named after the Gaelic settlement of "Áth Cliath" – "Baile Átha Cliath", or simply "Áth Cliath". As a result, Dublin is one of four counties in Ireland with a different name origin for both Irish and English – the others being Wexford, Waterford, and Wicklow, whose English names are also derived from Old Norse.
History.
The earliest recorded inhabitants of present-day Dublin settled along the mouth of the River Liffey. The remains of five wooden fish traps were discovered near Spencer Dock in 2007. These traps were designed to catch incoming fish at high tide and could be retrieved at low tide. Thin-bladed stone axes were used to craft the traps and radiocarbon dating places them in the Late Mesolithic period (–5,700 BCE).
The Vikings invaded the region in the mid-9th century AD and founded what would become the city of Dublin. Over time they mixed with the natives of the area, becoming Norse–Gaels. The Vikings raided across Ireland, Britain, France and Spain during this period and under their rule Dublin developed into the largest slave market in Western Europe. While the Vikings were formidable at sea, the superiority of Irish land forces soon became apparent, and the kingdom's Norse rulers were first exiled from the region as early as 902. Dublin was captured by the High King of Ireland, Máel Sechnaill II, in 980, who freed the kingdom's Gaelic slaves. Dublin was again defeated by Máel Sechnaill in 988 and forced to accept Brehon law and pay taxes to the High King. Successive defeats at the hands of Brian Boru in 999 and, most famously, at the Battle of Clontarf in 1014, relegated Dublin to the status of a lesser kingdom.
In 1170, the ousted king of Leinster, Diarmait Mac Murchada, and his Norman allies agreed to capture Dublin at a war council in Waterford. They evaded the intercepting army of High King Ruaidrí Ua Conchobair by marching through the Wicklow Mountains, arriving outside the walls of Dublin in late September. The king of Dublin, Ascall mac Ragnaill, met with Mac Murchada for negotiations; however, while talks were ongoing, the Normans, led by de Cogan and FitzGerald, stormed Dublin and overwhelmed its defenders, forcing mac Ragnaill to flee to the Northern Isles. Separate attempts to retake Dublin were launched by both Ua Conchobair and mac Ragnaill in 1171, both of which were unsuccessful.
The authority over Ireland established by the Anglo-Norman king Henry II was gradually lost during the Gaelic resurgence from the 13th century onwards. English power diminished so significantly that by the early 16th century English laws and customs were restricted to a small area around Dublin known as "The Pale". The Earl of Kildare's failed rebellion in 1535 reignited Tudor interest in Ireland, and Henry VIII proclaimed the Kingdom of Ireland in 1542, with Dublin as its capital. Over the next 60 years the Tudor conquest spread to every corner of the island, which was fully subdued by 1603.
Despite harsh penal laws and unfavourable trade restrictions imposed upon Ireland, Dublin flourished in the 18th century. The Georgian buildings which still define much of Dublin's architectural landscape to this day were mostly built over a 50-year period spanning from about 1750 to 1800. Bodies such as the Wide Streets Commission completely reshaped the city, demolishing most of medieval Dublin in the process. During the Enlightenment, the penal laws were gradually repealed and members of the Protestant Ascendancy began to regard themselves as citizens of a distinct Irish nation. The Irish Patriot Party, led by Henry Grattan, agitated for greater autonomy from Great Britain, which was achieved under the Constitution of 1782. These freedoms proved short-lived, as the Irish Parliament was abolished under the Acts of Union 1800 and Ireland was incorporated into the United Kingdom. Dublin lost its political status as a capital and went into a marked decline throughout the 19th century, leading to widespread demands to repeal the union.
Although at one time the second city of the British Empire, by the late 1800s Dublin was one of the poorest cities in Europe. The city had the worst housing conditions of anywhere in the United Kingdom, and overcrowding, disease and malnourishment were rife within central Dublin. In 1901, "The Irish Times" reported that the disease and mortality rates in Calcutta during the 1897 bubonic plague outbreak compared "favourably with those of Dublin at the present moment". Most of the upper and middle class residents of Dublin had moved to wealthier suburbs, and the grand Georgian homes of the 1700s were converted en masse into tenement slums. In 1911, over 20,000 families in Dublin were living in one-room tenements which they rented from wealthy landlords. Henrietta Street was particularly infamous for the density of its tenements, with 845 people living on the street in 1911, including 19 families – totalling 109 people – living in just one house.
After decades of political unrest, Ireland appeared to be on the brink of civil war as a result of the Home Rule Crisis. Despite being the centre of Irish unionism outside of Ulster, Dublin was overwhelmingly in favour of Home Rule. Unionist parties had performed poorly in the county since the 1870s, leading contemporary historian W. E. H. Lecky to conclude that "Ulster unionism is the only form of Irish unionism that is likely to count as a serious political force". Unlike their counterparts in the north, "southern unionists" were a clear minority in the rest of Ireland, and as such were much more willing to co-operate with the Irish Parliamentary Party (IPP) to avoid partition. Following the Anglo-Irish Treaty, Belfast unionist Dawson Bates decried the "effusive professions of loyalty and confidence in the Provisional Government" that was displayed by former unionists in the new Irish Free State.
The question of Home Rule was put on hold due to the outbreak of the First World War but was never to be revisited as a series of missteps by the British government, such as executing the leaders of the 1916 Easter Rising and the Conscription Crisis of 1918, fuelled the Irish revolutionary period. The IPP were nearly wiped out by Sinn Féin in the 1918 general election and, following a brief war of independence, 26 of Ireland's 32 counties seceded from the United Kingdom in December 1922, with Dublin becoming the capital of the Irish Free State, and later the Republic of Ireland.
From the 1960s onwards, Dublin city greatly expanded due to urban renewal works and the construction of large suburbs such as Tallaght, Coolock and Ballymun, which resettled both the rural and urban poor of County Dublin in newer state-built accommodation. Dublin was the driving force behind Ireland's Celtic Tiger period, an era of rapid economic growth that started in the early 1990s. In stark contrast to the turn of the 20th century, Dublin entered the 21st century as one of Europe's richest cities, attracting immigrants and investment from all over the world.
Geography and subdivisions.
Dublin is the third smallest of Ireland's 32 counties by area, and the largest in terms of population. It is the third-smallest of Leinster's 12 counties in size and the largest by population. Dublin shares a border with three counties – Meath to the north and west, Kildare to the west and Wicklow to the south. To the east, Dublin has an Irish Sea coastline which stretches for .
Dublin is a topographically varied region. The city centre is generally very low-lying, and many areas of coastal Dublin are at or near sea-level. In the south of the county, the topography rises steeply from sea-level at the coast to over in just a few kilometres. This natural barrier has resulted in densely populated coastal settlements in Dún Laoghaire–Rathdown and westward urban sprawl in South Dublin. In contrast, Fingal is generally rural in nature and much less densely populated than the rest of the county. Consequently, Fingal is significantly larger than the other three local authorities and covers about 49.5% of County Dublin's land area. Fingal is also perhaps the flattest region in Ireland, with the low-lying Naul Hills rising to a maximum height of just .
Dublin is bounded to the south by the Wicklow Mountains. Where the mountains extend into County Dublin, they are known locally as the Dublin Mountains ("Sléibhte Bhaile Átha Cliath"). Kippure, on the Dublin–Wicklow border, is the county's highest mountain, at above sea level. Crossed by the Dublin Mountains Way, they are a popular amenity area, with Two Rock, Three Rock, Tibradden, Ticknock, Montpelier Hill, and Glenasmole being among the most heavily foot-falled hiking destinations in Ireland. Forest cover extends to over within the county, nearly all of which is located in the Dublin Mountains. With just 6.5% of Dublin under forest, it is the 6th least forested county in Ireland.
Much of the county is drained by its three major rivers – the River Liffey, the River Tolka in north Dublin, and the River Dodder in south Dublin. The Liffey, at in length, is the 8th longest river in Ireland, and rises near Tonduff in County Wicklow, reaching the Irish Sea at the Dublin Docklands. The Liffey cuts through the centre of Dublin city, and the resultant Northside–Southside divide is an often used social, economic and linguistic distinction. Notable inlets include the central Dublin Bay, Rogerstown Estuary, the estuary of the Broadmeadow and Killiney Bay, under Killiney Hill. Headlands include Howth Head, Drumanagh and the Portraine Shore. In terms of biodiversity, these estuarine and coastal regions are home to a wealth of ecologically important areas. County Dublin contains 11 EU-designated Special Areas of Conservation (SACs) and 11 Special Protection Areas (SPAs).
The bedrock geology of Dublin consists primarily of Lower Carboniferous limestone, which underlies about two thirds of the entire county, stretching from Skerries to Booterstown. During the Lower Carboniferous (ca. 340 Mya), the area was part of a warm tropical sea inhabited by an abundance of corals, crinoids and brachiopods. The oldest rocks in Dublin are the Cambrian shales located on Howth Head, which were laid down ca. 500 Mya. Disruption following the closure of the Iapetus Ocean approximately 400 Mya resulted in the formation of granite. This is now exposed at the surface from the Dublin Mountains to the coastal areas of Dún Laoghaire. 19th-century lead extraction and smelting at the Ballycorus Leadmines caused widespread lead poisoning, and the area was once nicknamed "Death Valley".
Climate.
Dublin is in a maritime temperate oceanic region according to Köppen climate classification. Its climate is characterised by cool winters, mild humid summers, and a lack of temperature extremes. Met Éireann have a number of weather stations in the county, with its two primary stations at Dublin Airport and Casement Aerodrome.
Annual temperatures typically fall within a narrow range. In Merrion Square, the coldest month is February, with an average minimum temperature of , and the warmest month is July, with an average maximum temperature of . Due to the urban heat island effect, Dublin city has the warmest summertime nights in Ireland. The average minimum temperature at Merrion Square in July is , similar to London and Berlin, and the lowest July temperature ever recorded at the station was on 3 July 1974. At Dublin Airport, the driest month is February with of rainfall, and the wettest month is November, with of rain on average.
As the prevailing wind direction in Ireland is from the south and west, the Wicklow Mountains create a rain shadow over much of the county. Dublin's sheltered location makes it the driest place in Ireland, receiving only about half the rainfall of the west coast. Ringsend in the south of Dublin city records the lowest rainfall in the country, with an average annual precipitation of . The wettest area of the county is the Glenasmole Valley, which receives of rainfall per year. As a temperate coastal county, snow is relatively uncommon in lowland areas; however, Dublin is particularly vulnerable to heavy snowfall on rare occasions where cold, dry easterly winds dominate during the winter.
During the late summer and early autumn, Dublin can experience Atlantic storms, which bring strong winds and torrential rain to Ireland. Dublin was the county worst-affected by Hurricane Charley in 1986. It caused severe flooding, especially along the River Dodder, and is reputed to be the worst flood event in Dublin's history. Rainfall records were shattered across the county. Kippure recorded of rain over a 24-hour period, the greatest daily rainfall total ever recorded in Ireland. The government allocated IR£6,449,000 (equivalent to US$20.5 million in 2020) to repair the damage wrought by Charley. The two reservoirs at Bohernabreena in the Dublin Mountains were upgraded in 2006 after a study into the impact of Hurricane Charley concluded that a slightly larger storm would have caused the reservoir dams to burst, which would have resulted in catastrophic damage and significant loss of life.
Offshore islands.
In contrast with the Atlantic Coast, the east coast of Ireland has relatively few islands. County Dublin has one of the highest concentrations of islands on the Irish east coast. Colt Island, St. Patrick's Island, Shenick Island and numerous smaller islets are clustered off the coast of Skerries, and are collectively known as the "Skerries Islands Natural Heritage Area". Further out lies Rockabill, which is Dublin's most isolated island, at about offshore. Lambay Island, at , is the largest island off Ireland's east coast and the easternmost point of County Dublin. Lambay supports one of the largest seabird colonies in Ireland and, curiously, also supports a population of non-native Red-necked wallabies. To the south of Lambay lies a smaller island known as Ireland's Eye – the result of a mistranslation of the island's Irish name by invading Vikings.
Bull Island is a man-made island lying roughly parallel to the shoreline which began to form following the construction of the Bull Wall in 1825. The island is still growing and is currently long and wide. In 1981, North Bull Island ("Oileán an Tairbh Thuaidh") was designated as a UNESCO biosphere.
Subdivisions.
For statistical purposes at European level, the county as a whole forms the Dublin Region – a NUTS III entity – which is in turn part of the Eastern and Midland Region, a NUTS II entity. Each of the local authorities have representatives on the Eastern and Midland Regional Assembly.
Baronies.
There are ten historic baronies in the county. While baronies continue to be officially defined units, they ceased to have any administrative function following the Local Government Act 1898, and any changes to county boundaries after the mid-19th century are not reflected in their extent. The last boundary change of a barony in Dublin was in 1842, when the barony of Balrothery was divided into Balrothery East and Balrothery West. The largest recorded barony in Dublin in 1872 was Uppercross, at , and the smallest barony was Dublin, at .
Townlands.
Townlands are the smallest officially defined geographical divisions in Ireland. There are 1,090 townlands in Dublin, of which 88 are historic town boundaries. These town boundaries are registered as their own townlands and are much larger than rural townlands. The smallest rural townlands in Dublin are just 1 acre in size, most of which are offshore islands ("Clare Rock Island, Lamb Island, Maiden Rock, Muglins, Thulla Island"). The largest rural townland in Dublin is 2,797 acres ("Castlekelly"). The average size of a townland in the county (excluding towns) is 205 acres.
Urban and rural districts.
Under the Local Government (Ireland) Act 1898, County Dublin was divided into urban districts of Blackrock, Clontarf, Dalkey, Drumcondra, Clonliffe and Glasnevin, Killiney and Ballybrack, Kingstown, New Kilmainham, Pembroke, and Rathmines and Rathgar, and the rural districts of Balrothery, Celbridge No. 2, North Dublin, Rathdown, and South Dublin.
Howth, formerly within the rural district of Dublin North, became an urban district in 1919. Kingstown was renamed Dún Laoghaire in 1920. The rural districts were abolished in 1930.
Balbriggan, in the rural district of Balrothery, had town commissioners under the Towns Improvement (Ireland) Act 1854. This became a town council in 2002. In common with all town councils, it was abolished in 2014.
The urban districts were gradually absorbed by the city of Dublin, except for four coastal districts of Blackrock, Dalkey, Dún Laoghaire, and Killiney and Ballybrack, which formed the borough of Dún Laoghaire in 1930.
Counties and the city.
The city of Dublin had been administered separately since the 13th century. Under the Local Government (Ireland) Act 1898, the two areas were defined as the administrative county of Dublin and the county borough of Dublin, with the latter in the city area.
In 1985, County Dublin was divided into three electoral counties: Dublin–Belgard to the southwest (South Dublin from 1991), Dublin–Fingal to the north (Fingal from 1991), and Dún Laoghaire–Rathdown to the southeast.
On 1 January 1994, under the Local Government (Dublin) Act 1993, County Dublin ceased to exist as a local government area, and was succeeded by the counties of Dún Laoghaire–Rathdown, Fingal and South Dublin, each coterminous (with minor boundary adjustments) with the area of the corresponding electoral county. In discussing the legislation, Avril Doyle TD said, "The Bill before us today effectively abolishes County Dublin, and as one born and bred in these parts of Ireland I find it rather strange that we in this House are abolishing County Dublin. I am not sure whether Dubliners realise that that is what we are about today, but in effect that is the case."
Although the Electoral Commission should, as far as practicable, avoid breaching county boundaries when recommending Dáil constituencies, this does not include the boundaries of a city or the boundary between the three counties in Dublin. There is also still a sheriff appointed for County Dublin.
The term "County Dublin" is still in common usage. Many organisations and sporting teams continue to organise on a County Dublin basis. The Placenames Branch of the Department of Rural and Community Development and the Gaeltacht maintains a Placenames Database that records all placenames, past and present. County Dublin is listed in the database along with the subdivisions of that county. It is also used as an address for areas within Dublin outside of the Dublin postal district system.
For a period in 2020 during the COVID-19 pandemic, to reduce person-to-person contact, government regulations restricted activity to "within the county in which the relevant residence is situated". Within the regulations, the local government areas of "Dún Laoghaire–Rathdown, Fingal, South Dublin and Dublin City" were deemed to be a single county (as were the city and the county of Cork, and the city and the county of Galway).
The latest Ordnance Survey Ireland "Discovery Series" (Third Edition 2005) 1:50,000 map of the Dublin Region, Sheet 50, shows the boundaries of the city and three surrounding counties of the region. Extremities of the Dublin Region, in the north and south of the region, appear in other sheets of the series, 43 and 56 respectively.
Local government.
There are four local authorities whose remit collectively encompasses the geographic area of the county and city of Dublin. These are Dublin City Council, South Dublin County Council, Dún Laoghaire–Rathdown County Council and Fingal County Council.
Until 1 January 1994, the administrative county of Dublin was administered by Dublin County Council. From that date, its functions were succeeded by Dún Laoghaire–Rathdown County Council, Fingal County Council and South Dublin County Council, each with its county seat, respectively administering the new counties established on that date.
The city was previously designated a county borough and administered by Dublin Corporation. Under the Local Government Act 2001, the country was divided into local government areas of cities and counties, with the county borough of Dublin being designated a city for all purposes, now administered by Dublin City Council. Each local authority is responsible for certain local services such as sanitation, planning and development, libraries, the collection of motor taxation, local roads and social housing.
Dublin, comprising the four local government areas in the county, is a strategic planning area within the Eastern and Midland Regional Assembly (EMRA). It is a NUTS Level III region of Ireland. The region is one of eight regions of Ireland for Eurostat statistics at NUTS 3 level. Its NUTS code is IE061.
This area formerly came under the remit of the Dublin Regional Authority. This Authority was dissolved in 2014.
Demographics.
Population.
As of the 2022 census, the population of Dublin was 1,458,154, an 8.4% increase since the 2016 Census. The county's population first surpassed 1 million in 1981, and is projected to reach 1.8 million by 2036.
Dublin is Ireland's most populous county, a position it has held since the 1926 Census, when it overtook County Antrim. As of 2022, County Dublin has over twice the population of County Antrim and two and a half times the population of County Cork. Approximately 21% of Ireland's population lives within County Dublin (28% if only the Republic of Ireland is counted). Additionally, Dublin has more people than the combined populations of Ireland's 16 smallest counties.
With an area of just , Dublin is by far the most densely populated county in Ireland. The population density of the county is 1,582 people per square kilometre – over 7 times higher than Ireland's second most densely populated county, County Down in Northern Ireland.
During the Celtic Tiger period, a large number of Dublin natives (Dubliners) moved to the rapidly expanding commuter towns in the adjoining counties. As of 2022, approximately 27.2% (345,446) of Dubliners were living outside of County Dublin. People born within Dublin account for 28% of the population of Meath, 32% of Kildare, and 37% of Wicklow. There are 922,744 Dublin natives living within the county, accounting for 63.3% of the population. People born in other Irish counties living within Dublin account for roughly 11% of the population.
Between 2016 and 2022, international migration produced a net increase of 88,300 people. Dublin has the highest proportion of international residents of any county in Ireland, with around 25% of the county's population being born outside of the Republic of Ireland.
As of the 2022 census, 5.6 percent of the county's population was reported as younger than 5 years old, 25.7 percent were between 5 and 25, 55.3 percent were between 25 and 65, and 13.4 percent of the population was older than 65. Of this latter group, 48,865 people (3.4 percent) were over the age of 80, more than doubling since 2016. Across all age groups, there were slightly more females (51.06 percent) than males (48.94 percent).
In 2021, there were 16,596 births within the county, and the average age of a first time mother was 31.9.
Migration.
Over a quarter (25.2 percent) of County Dublin's population was born outside of the Republic of Ireland. In 2022, Dublin City had the highest percentage of non-nationals in the county (27.3 percent), and South Dublin had the lowest (20.9 percent). Historically, the immigrant population of Dublin was mainly from the United Kingdom and other European Union member states. However, results from the 2022 census revealed that immigrants from non-EU/UK countries were the largest source of foreign-born residents for the first time, accounting for 12.9 percent of the county's population. Those from other European Union member states accounted for 8.3 percent of Dublin's population, and those from the United Kingdom a further 4.1 percent.
Prior to the 2000s, the UK was consistently the largest single source of non-nationals living in Dublin. After declining in the previous two census periods, the number of UK-born residents living in Dublin increased by 5.8 percent between 2016 and 2022. There was a large difference between the number of people living in Dublin who were born in the UK (58,586) and those who held sole-UK citizenship in the 2022 census (22,936). This discrepancy can arise for a variety of factors, such as people born in Northern Ireland claiming Irish citizenship rather than UK citizenship, Irish people born in the UK who now live in Dublin, British people who have become natural citizens, and foreign residents of Dublin who were born in the UK but are not UK citizens. Depending on an individual's responses in the census, all of these examples could result in the country of birth being registered by the CSO as the United Kingdom, but nationality being registered as Irish or a third country.
Following Poland's accession to the EU, Poles quickly became the fastest growing immigrant community in Dublin. Just 188 Poles applied for Irish work permits in 1999. By 2006 this number had grown to 93,787. After the 2008 Irish economic downturn, as many as 3,000 Poles left Ireland each month. Despite this, Poles remain one of Dublin's largest foreign-born groups. In contrast to more recent arrivals, a large percentage of Dublin's Polish citizens (30.9 percent) also hold Irish citizenship.
Outside of Europe, Indians and Brazilians are the predominant foreign-national groups. As of 2022, Indians were the fastest growing major immigrant group in Dublin, and they are now the county's second largest foreign-born group after the UK. Dublin's Indian community grew by 155.2 percent between 2016 and 2022. There were 29,582 Indian-born residents within Dublin as of 2022, up from 9,884 in the 2011 census. The influx of Indians is driven in part by multinational tech companies such as Microsoft, Google and Meta who have located their European headquarters within the county, in areas such as the Silicon Docks and Sandyford. In August 2020, the first dedicated Hindu temple in Ireland was built in Walkinstown.
The number of Brazilian citizens living in Dublin more than tripled between 2011 and 2022, from 4,641 to 16,441. This increase is mainly a result of Ireland's participation in the Brazilian government's "Ciência sem Fronteiras" programme, which sees thousands of Brazilian students come to study in Ireland each year, many of whom remain in the country afterwards.
Although not fully captured during the census period, Dublin also houses a significant number of Ukrainian refugees under the Temporary Protection Directive. As of October 2023, the number of Ukrainians living in emergency accommodation within the county is estimated to be around 14,000.
Ethnicity.
According to the Central Statistics Office, in 2022 the population of County Dublin self-identified as:
In terms of total numbers, Dublin has the largest non-white population in Ireland, with an estimated 158,653 residents, accounting for 11.1% of the county's population. Over two-fifths (42.2 percent) of Ireland's black residents live within the county. In terms of percentage of population, Fingal has the highest percentage of both black (3.6 percent) and non-white (12.4 percent) residents of any local authority in Ireland. Conversely, Dún Laoghaire–Rathdown in the south of the county has one of Ireland's lowest percentages of black residents, with only 0.77% of the population identifying as black in 2022. Additionally, 43.3% of Ireland's multiracial population lives within County Dublin. Those who did not state their ethnicity more than doubled between 2016 and 2022, from 4.1% to 8.5%.
Religion.
The largest religious denomination by both number of adherents and as a percentage of Dublin's population in 2022 was the Roman Catholic Church, at 57.4 percent. All other Christian denominations including Church of Ireland, Eastern Orthodox, Presbyterian and Methodist accounted for 8.1 percent of Dublin's population. Together, all denominations of Christianity accounted for 65.5 percent of the county's population. According to the 2022 census, Dún Laoghaire–Rathdown is the least religious local authority in Ireland, with 23.9 percent of the population declaring themselves non-religious, followed closely by Dublin city (22.6 percent). In the county as a whole, those unaffiliated with any religion represented 20.1 percent of the population, which is the largest percentage of non-religious people of any county in Ireland. A further 9.1 percent of the population did not state their religion, up from just 4.1 percent in 2016.
Of the non-Christian religions, Islam is the largest in terms of number of adherents, with Muslims accounting for 2.6% of the population. After Islam, the largest non-Christian religions in 2022 were Hinduism (1.4 percent) and Buddhism (0.27 percent). While relatively small in absolute terms, County Dublin contains over half of Ireland's Hindu (58.7 percent) residents, and just under half of its Eastern Orthodox (45.3 percent), Islamic (45.0 percent) and Buddhist (41.7 percent) residents.
Dublin and its hinterland have been a Christian diocese since 1028. For centuries, the Primacy of Ireland was disputed between Dublin, the social and political capital of Ireland, and Armagh, site of Saint Patrick's main church, which was founded in 445 AD. In 1353 the dispute was settled by Pope Innocent VI, who proclaimed that the archbishop of Dublin was "Primate of Ireland", while the archbishop of Armagh was titled "Primate of All Ireland". These two distinct titles were replicated in the Church of Ireland following the Reformation. Historically, County Dublin was the epicentre of Protestantism in Ireland outside of Ulster. Records from the 1891 census show that the county was 21.4 percent Protestant towards the end of the 19th century. By the 1911 census this had gradually declined to around 20% due to poor economic conditions, as Dublin Protestants moved to industrial Belfast. Following the War of Independence (1919–1921), Dublin's Protestant community went into a steady decline, falling to 8.5 percent of the population by 1936.
Between 2016 and 2022, the fastest-growing religions in Dublin were Hinduism (148.9 percent), Eastern Orthodox (51.6 percent), and Islam (27.9 percent), while the most rapidly declining religions were Evangelicalism (−10.4 percent), Catholicism (−8.7 percent), Jehovah's Witnesses (−5.9 percent) and Buddhism (−5.4 percent).
Metropolitan area.
Dublin city.
The boundaries of Dublin City Council form the urban core of the city, often referred to as "Dublin city centre", an area of 117.8 square kilometres. This encompasses the central suburbs of the city, extending as far south as Terenure and Donnybrook; as far north as Ballymun and Donaghmede; and as far west as Ballyfermot. As of 2022, there were 592,713 people living within Dublin city centre. However, as the continuous built-up area extends beyond the city boundaries, the term "Dublin city and suburbs" is commonly employed when referring to the actual extent of Dublin.
Dublin city and suburbs.
Dublin city and suburbs is a CSO-designated urban area which includes the densely populated contiguous built-up area which surrounds Dublin city centre. As of the 2022 census, Dublin city and suburbs encompassed 345 km2, expanding in size by 8.7 percent (or 27.5 km2) since the 2016 census. The population of Dublin city and suburbs grew from 1,173,179 in 2016 to 1,263,219 in 2022, an increase of 7.7 percent.
Following the 2022 census, Dublin city and suburbs was designated a cross-county settlement for the first time, as the CSO included the Kribensis Manor housing development within the contiguous built-up area of the city. The houses are located in County Meath, along the R149 road between Hilltown and the village of Clonee.
Approximately 87% of County Dublin's population lives within Dublin city and suburbs as of the 2022 census. The remainder of the county covers roughly two thirds of Dublin's land area, but is home to just 196,140 people.
Dublin metropolitan area.
As the city proper does not extend beyond Dublin Airport, nearby towns such as Swords, Donabate, Portmarnock and Malahide are not considered part of the city, and are recorded by the CSO as separate settlements. However, under Ireland's National Planning Framework, these towns are considered part of the Dublin Metropolitan Area Strategic Plan (MASP). The MASP also includes towns outside of the county, such as Naas, Leixlip and Maynooth in County Kildare, Dunboyne in County Meath, and Bray, Kilmacanogue and Greystones in County Wicklow, but does not include Balbriggan, Lusk, Rush or Skerries, which are located in the far north of County Dublin.
Greater Dublin Area.
The Greater Dublin Area (GDA) is a commonly used planning jurisdiction which extends to the wider network of commuter towns that are economically connected to Dublin city. The GDA consists of County Dublin and its three neighbouring counties: Kildare, Meath and Wicklow.
With a population of 2.1 million and an area of 6,986 square kilometres, it contains 40% of the population of the State, and covers 9.9% of its land area.
Urban areas.
Under CSO classification, an "urban area" is a town with a population greater than 1,500. Dublin is the most urbanised county in Ireland, with 98% of its residents residing in urban areas as of 2022. Of Dublin's three non-city local authorities, Fingal has the highest proportion of people living in rural areas (7.9%), while Dún Laoghaire–Rathdown has the lowest (1.19%). The western suburbs of Dublin city, such as Tallaght and Blanchardstown, have experienced rapid growth in recent decades, and each now has a population roughly equivalent to that of Galway city.
Transportation.
County Dublin has the oldest and most extensive transportation infrastructure in Ireland. The Dublin and Kingstown Railway, opened in December 1834, was Ireland's first railway line. The line, which ran from Westland Row to Dún Laoghaire, was originally intended to be used for cargo. However, it proved far more popular with passengers and became the world's first commuter railway line. The line has been upgraded multiple times throughout its history and is still in use to this day, making it the oldest commuter railway route in the world.
Public transport in Dublin was managed by the Dublin Transportation Office until 2009, when it was replaced by the National Transport Authority (NTA). The three pillars currently underpinning the public transport network of the Greater Dublin Area (GDA) are Dublin Suburban Rail, the Luas and the bus system. There are six commuter lines in Dublin, which are managed by Iarnród Éireann. Five of these lines serve as routes between Dublin and towns across the GDA and beyond. The sixth route, known as Dublin Area Rapid Transit (DART), is electrified and serves only Dublin and northern Wicklow. The newest addition to Dublin's public transport network is a tram system called the Luas. The service began with two disconnected lines in 2004, with three extensions opened in 2009, 2010 and 2011 before a cross-city link between the lines and further extension opened in 2017.
Historically, Dublin had an extensive tram system, which commenced operation in 1871. It was operated by the Dublin United Transport Company (DUTC) and was very advanced for its day, with near-full electrification from 1901. From the 1920s onwards, the DUTC began to acquire private bus operators and gradually closed some of its lines. Further declines in passenger numbers were driven in part by a belief at the time that trams were outdated and archaic. All tram lines terminated in 1949, except for the Howth tram, which ran until 1959.
Dublin Bus is the county's largest bus operator, carrying 138 million passengers in 2019. For much of the city, particularly west Dublin, the bus is the only public transport option available, and there are numerous smaller private bus companies in operation across County Dublin. National bus operator Bus Éireann provides long-distance routes to towns and villages located outside of Dublin city and its immediate hinterland.
In November 2005, the government announced a €34 billion initiative called Transport 21 which included a substantial expansion to Dublin's transport network. The project was cancelled in May 2011 in the aftermath of the 2008 recession. Consequently, by 2017 Hugh Creegan, deputy chief of the NTA, stated that there had been a "chronic underinvestment in public transport for more than a decade". By 2019, Dublin was reportedly the 17th most congested city in the world, and had the 5th highest average commute time in the European Union. The Luas and rail network regularly experience significant overcrowding and delays during peak hours, and in 2019 Iarnród Éireann was widely ridiculed for asking commuters to "stagger morning journeys" to alleviate the problem.
The M50 is an orbital motorway around Dublin city, and is the busiest motorway in the country. It serves as the centre of both Dublin's and Ireland's motorway network, and most of the national primary roads to other cities begin at the M50 and radiate outwards. The current route was built in various sections over the course of 27 years, from 1983 to 2010. All major roads in Ireland are managed by Transport Infrastructure Ireland (TII), which is headquartered in Parkgate Street, Dublin 8. As of 2019, there were over 550,000 cars registered in County Dublin, accounting for 25.3% of all cars registered in the State. Due to the county's small area and high degree of urbanisation, there is a preference for "D"-registered used cars throughout Ireland, as they are considered to have undergone less wear and tear.
For international travel, around 1.7 million passengers travel by ferry through Dublin Port each year. A Dún Laoghaire to Holyhead ferry was formerly operated by Stena Line, but the route was closed in 2015. Dublin Airport is Ireland's largest airport, and 32.9 million passengers passed through it in 2019, making it Europe's 12th-busiest airport.
Economy.
The Dublin Region, which is conterminous with County Dublin, has the largest and most highly developed economy in Ireland, accounting for over two-fifths of national Gross Domestic Product (GDP). The Central Statistics Office estimates that the GDP of the Dublin Region in 2020 was €157.2 billion ($187 billion / £141 billion at 2020 exchange rates). In nominal terms, Dublin's economy is larger than those of roughly 140 sovereign states. The county's GDP per capita is €107,808 ($117,688 / £92,620), one of the highest regional GDPs per capita in the EU. As of 2019, Dublin also had the highest Human Development Index in Ireland at 0.965, placing it among the most developed places in the world in terms of life expectancy, education and per capita income.
Affluence.
In 2020, average disposable income per person in Dublin was €27,686, or 118% of the national average (€23,400), the highest of any county in Ireland. As Ireland's most populous county, Dublin has the highest total household income in the country, at an estimated €46.8 billion in 2017 – higher than the Border, Midlands, West and South-East regions combined. Dublin residents were the highest per capita tax contributors in the State, returning a total of €15.1 billion in taxes in 2017.
Many of Ireland's most prominent political, educational, cultural and media centres are concentrated south of the River Liffey in Dublin city. Further south, areas like Dún Laoghaire, Dalkey and Killiney have long been some of Dublin's most affluent areas, and Dún Laoghaire–Rathdown consistently has the highest average house prices in Ireland. This has resulted in a perceived socio-economic divide in Dublin, between the generally less affluent Northside and the wealthier Southside. In Dublin (both city and county), residents will commonly refer to themselves as a "Northsider" or a "Southsider", and the division is often caricatured in Irish comedy, media and literature, for example Ross O'Carroll-Kelly and Damo and Ivor. References to the divide have also become colloquialisms in their own right, such as "D4" (referring to the Dublin 4 postal district), which is a pejorative term for an upper middle class Irish person.
While the northside-southside divide remains prevalent in popular culture, economic indices such as the Pobal HP deprivation index have shown that the distinction does not reflect economic reality. Many of Dublin's most affluent areas (Clontarf, Raheny, Howth, Portmarnock, Malahide) are located in the north of the county, and many of its most deprived areas (Jobstown, Ballyogan, Ballybrack, Dolphin's Barn, Clondalkin) are located in the south of the county.
Utilising CSO data from the past three censuses, Pobal HP revealed that there was a much higher concentration of below average, disadvantaged and very disadvantaged areas in west Dublin. In 2012, Irish Times columnist Fintan O'Toole posited that the real economic divide in Dublin was not north–south, but east–west – between the older coastal areas of eastern Dublin and the newer sprawling suburbs of western Dublin – and that the perpetuation of the northside–southside "myth" was a convenient way to gloss over class division within the county. O'Toole argued that framing the city's wealth divide as a light-hearted north–south stereotype was easier than having to address the socio-economic impacts of deliberate government policy to remove working-class people from the city centre and settle them on the margins.
Finance.
Dublin is both a European and a global financial hub, and around 200 of the world's leading financial services firms have operations within the county. In 2017 and 2018 respectively, Dublin was ranked 5th in Europe and 31st globally in the Global Financial Centres Index (GFCI). In the mid-1980s, parts of central Dublin had fallen into a state of dereliction and the Irish government pursued an urban regeneration programme. An 11-hectare special economic zone (SEZ) was set up in 1987, known as the International Financial Services Centre (IFSC). At the time of its establishment, the SEZ had the lowest corporate tax rate in the EU. The IFSC has since expanded into a 37.8-hectare site centred around the Dublin Docklands. As of 2020, over €1.8 trillion of funds are administered from Ireland.
There was renewed interest in Dublin's financial services sector in the wake of the UK's vote to withdraw from the European Union in 2016. Many firms, including Barclays and Bank of America, pre-emptively moved some of their operations from London to Dublin in anticipation of restricted EU market access. A survey conducted by Ernst & Young in 2021 found that Dublin was the most popular destination for firms in the UK considering relocating to the EU, ahead of Luxembourg and Frankfurt. It is estimated that Dublin's financial sector will grow by about 25% as a direct result of Brexit, and as many as 13,000 jobs could move from the UK to County Dublin in the years immediately after its withdrawal.
Industry and energy.
The economy of Dublin benefits from substantial amounts of both indigenous and foreign investment. In 2018, the Financial Times ranked Dublin the most attractive large city in the world for foreign direct investment, and the city has been consistently ranked by Forbes as one of the world's most business-friendly. The economy is centred on financial services, the pharmaceuticals and biotechnology industries, information technology, logistics and storage, professional services, agriculture and tourism. IDA Ireland, the state agency responsible for attracting foreign direct investment, was founded in Dublin in 1949.
Dublin has four power plants, all of which are concentrated in the docklands area of Dublin city. Three are natural-gas plants operated by the ESB, and the Poolbeg Incinerator is operated by Covanta Energy. The four plants have a combined capacity of 1.039 GW, roughly 12.5% of the island of Ireland's generation capacity as of 2019. The disused Poolbeg chimneys are the tallest structures in the county, and were granted protection by Dublin City Council in 2014.
As a result of Dublin city's location within a sheltered bay at the mouth of a navigable river, shipping has been a key industry in the county since medieval times. By the 18th century, Dublin was a bustling maritime city, and large-scale engineering projects were undertaken to enhance the port's capacity, such as the Great South Wall, which was the largest sea wall in the world at the time of its construction in 1715. Dublin Port was originally located along the Liffey, but gradually moved towards the coast over the centuries as vessel size increased. It is today the largest and busiest port in Ireland, handling 50% of the Republic of Ireland's trade and receiving 60% of all vessel arrivals.
Dublin Port occupies some of the most expensive land in the country, with an estimated price per acre of around €10 million. Since the 2000s, there have been calls to relocate Dublin Port out of the city and free up its land for residential and commercial development. This was first proposed by the Progressive Democrats at the height of the Celtic Tiger in 2006, who valued the land at between €25 and €30 billion, although nothing came of the proposal. During the housing crisis of the late 2010s the idea again began to attract supporters, among them economist David McWilliams. Currently, there are no official plans to move the port elsewhere, and the Dublin Port Company strongly opposes relocation.
Dublin hosts the headquarters of some of Ireland's largest multinational corporations, including 14 of the 20 companies which make up the ISEQ 20 index – those with the highest trading volume and market capitalisation of all Irish Stock Exchange listed companies. These are: AIB, Applegreen, Bank of Ireland, Cairn Homes, Continental Group, CRH, Dalata Hotel Group, Flutter Entertainment, Greencoat Renewables, Hibernia REIT, IRES, Origin Enterprises, Ryanair and Smurfit Kappa.
Tourism.
County Dublin receives by far the most overseas tourists of any county in Ireland. This is primarily due to Dublin city's status as Ireland's largest city and its transportation hub. Dublin is also Ireland's most popular destination for domestic tourists. According to Fáilte Ireland, in 2017 Dublin received nearly 6 million overseas tourists, and just under 1.5 million domestic tourists. Most of Ireland's international flights transit through Dublin Airport, and the vast majority of passenger ferry arrivals dock at Dublin Port. In 2019, the port also facilitated 158 cruise ship arrivals. The tourism industry in the county is worth approximately €2.3 billion per year.
As of 2019, 4 of the top 10 fee-paying tourist attractions in Ireland are located within County Dublin, as well as 5 of the top 10 free attractions. The Guinness Storehouse at St. James's Gate is Ireland's most visited tourist attraction, receiving 1.7 million visitors in 2019, and over 20 million total visits since 2000. Dublin also contains Ireland's 3rd (Dublin Zoo), 4th (Book of Kells) and 6th (St Patrick's Cathedral) most visited fee-paying attractions. The top free attractions in Dublin are the National Gallery of Ireland, the National Botanic Gardens, the National Museum of Ireland and the Irish Museum of Modern Art, all of which receive over half a million visitors per year.
Agriculture.
Despite having the smallest farmed area of any county, Dublin is one of Ireland's major agricultural producers. Dublin is the largest producer of fruit and vegetables in Ireland, the third largest producer of oilseed rape and has the fifth largest fishing industry. Fingal alone produces 55% of Ireland's fresh produce, including soft fruits and berries, apples, lettuces, peppers, asparagus, potatoes, onions, and carrots. As of 2020, the Irish Farmers' Association estimates that the total value of Dublin's agricultural produce is €205 million. According to the CSO, fish landings in the county are worth a further €20 million.
Approximately 41% of the county's land area (38,576 ha) is farmed. The county has the 9th largest area under tillage in the country, and the 4th largest dedicated to fruit and horticulture. Rural County Dublin is considered a peri-urban region, where an urban environment transitions into a rural one. Due to the growth of Dublin city and its commuter towns in the north of the county, the region is considered to be under significant pressure from urban sprawl. Between 1991 and 2010, the amount of agricultural land within the county decreased by 22.9%. In 2015, the local authorities of Fingal, South Dublin and Dún Laoghaire–Rathdown developed a joint Dublin Rural Local Development Strategy aimed at enhancing the region's agricultural output, while also managing and minimising the impact of urbanisation on biodiversity and on the identity and culture of rural Dublin.
The county has a small forestry industry that is based almost entirely in the upland areas of south County Dublin. According to the 2017 National Forestry Inventory, a small share of the county's land was under forest, part of which was private forestry. The majority of Dublin's forests are owned by the national forestry company, Coillte. In the absence of increased private planting, the county's commercial timber capacity is expected to decrease in the coming decades, as Coillte intends to convert much of its holdings in the Dublin Mountains into non-commercial mixed forests.
Dublin has 810 individual farms, and the largest average farm size of any county in Ireland. Roughly 9,400 people within the county are directly employed in either agriculture or the food and drink processing industry. Numerous Irish and multinational food and drink companies are either based in Dublin or have facilities within the county, including Mondelez, Coca-Cola, Mars, Diageo, Kellogg's, Danone, Ornua, Pernod Ricard and Glanbia. In 1954, Tayto Crisps was established in Coolock and developed into a cultural phenomenon throughout much of the Republic of Ireland; its operations and headquarters have since moved to neighbouring County Meath. Another popular crisp brand, Keogh's, is based in Oldtown.
Education.
In Ireland, spending on education is controlled by the government and the allocation of funds is decided each year in the annual budget. Local authorities retain limited responsibilities such as funding for school meals, service supports costs and the upkeep of libraries.
There are hundreds of primary and secondary schools within County Dublin, most of which are English-language schools. Several international schools are based in Dublin, such as St Kilian's German School and Lycée Français d'Irlande, which teach in foreign languages. There is also a large minority of students attending gaelscoileanna (Irish-language primary schools). There are 34 gaelscoileanna and 10 gaelcholáistí (Irish-language secondary schools) in the county, with a total of 12,950 students as of 2018. In terms of college acceptance rates, gaelcholáistí are consistently the best performing schools in Dublin, and among the best performing in Ireland.
Although the government pays for a large majority of school costs, including teachers' salaries, the Roman Catholic Church is the largest owner of schools in Dublin, and preference is given to Catholic students over non-Catholic students in oversubscribed areas. This has resulted in a growing movement towards non-denominational and co-educational schools in the county.
The majority of private secondary schools in Dublin are still single sex, and continue to have religious patronages with either congregations of the Catholic Church (Spiritans, Sisters of Loreto, Jesuits) or Protestant denominations (Church of Ireland, Presbyterian). Newer private schools which cater for the Leaving Cert cycle such as the Institute of Education and Ashfield College are generally non-denominational and co-educational. In 2018, Nord Anglia International School Dublin opened in Leopardstown, becoming the most expensive private school in Ireland.
As of 2023–24, four of Dublin's third level institutions are listed in the Top 500 of either the Times Higher Education Rankings or the QS World Rankings, placing them amongst the top 5% of all third level institutions in the world. TCD (81), UCD (171) and DCU (436) are within the Top 500 of the QS rankings; and TCD (161), RCSI (201–250), UCD (201–250) and DCU (451–500) are within the Top 500 of the Times rankings. The newly amalgamated TUD also placed within the world's Top 1,000 universities in the QS rankings, and within the Top 500 for Engineering and Electronics.
County Dublin has four public universities, as well as numerous other colleges, institutes of technology and institutes of further education. Its largest third level institutions include Trinity College Dublin (TCD), University College Dublin (UCD), Dublin City University (DCU), Technological University Dublin (TUD) and the Royal College of Surgeons in Ireland (RCSI).
Politics.
Elections.
For elections to Dáil Éireann, the area of the county is currently divided into eleven constituencies: Dublin Bay North, Dublin Bay South, Dublin Central, Dublin Fingal, Dublin Mid-West, Dublin North-West, Dublin Rathdown, Dublin South-Central, Dublin South-West, Dublin West, and Dún Laoghaire. Together they return 45 deputies (TDs) to the Dáil.
The first Irish Parliament convened in the small village of Castledermot, County Kildare on 18 June 1264. Representatives from seven constituencies were present, one of which was the constituency of Dublin City. Dublin was historically represented in the Irish House of Commons through the constituencies of Dublin City and County Dublin. Three smaller constituencies had been created by the 17th century: Swords, which was created sometime between 1560 and 1585, with Walter Fitzsimons and Thomas Taylor being its first recorded MPs; Newcastle in the west of the county, created in 1613; and Dublin University, a university constituency covering Trinity College, also created in 1613. While proceedings of the Irish Parliament were well documented, many of the records from this time were lost during the shelling of the Four Courts in July 1922.
Following the Acts of Union 1800, Dublin was represented in Westminster through three constituencies from 1801 to 1885: Dublin City, County Dublin and Dublin University. A series of local government and electoral reforms in the late 19th century radically altered the county's political map, and by 1918 there were twelve constituencies within County Dublin.
Throughout the twentieth century, Dublin's representation expanded as the population grew. Under the Electoral Act 1923, the first division of constituencies arranged by Irish legislation, Dublin's geographical constituencies returned 23 of the 147 TDs elected for geographical constituencies; this contrasts with 45 of 160 at the most recent division.
Twenty-three Dáil Éireann constituencies have been created and abolished within the county since independence, the most recent being the constituencies of Dublin South, Dublin North, Dublin North-Central, Dublin North-East and Dublin South-East, which were abolished in 2016.
Of the fifteen people to have held the office of Taoiseach since 1922, more than half were either born or raised within County Dublin: W. T. Cosgrave, John A. Costello, Seán Lemass, Liam Cosgrave, Charles Haughey (born in County Mayo but raised in Dublin), Garret FitzGerald, Bertie Ahern and Leo Varadkar (Cosgrave held the office of President of the Executive Council; by convention, Taoisigh are numbered to include this position). Conversely, just one of Ireland's nine presidents has hailed from the county, namely Seán T. O'Kelly, who served as president from 1945 to 1959.
European elections.
The four local government areas in County Dublin form the 4-seat constituency of Dublin in European Parliament elections.
National government.
As the capital city, Dublin is the seat of the national parliament of Ireland, the Oireachtas. It is composed of the president of Ireland, Dáil Éireann as a house of representatives, and Seanad Éireann as an upper house. Both houses of the Oireachtas meet in Leinster House, a former ducal palace on Kildare Street, which has been the home of the Irish parliament since the creation of the Irish Free State. The First Dáil of the revolutionary Irish Republic met in the Round Room of the Mansion House, the present-day residence of the lord mayor of Dublin, in January 1919. The former Irish Parliament, which was abolished in 1801, was located at College Green; Parliament House now holds a branch of Bank of Ireland. Government Buildings, located on Merrion Street, houses the Department of the Taoiseach, the Council Chamber, the Department of Finance, and the Office of the Attorney General.
The president resides in Áras an Uachtaráin in Phoenix Park, a stately ranger's lodge built in 1757. The house was bought by the Crown in 1780 to be used as the summer residence of the lord lieutenant of Ireland, the British viceroy in the Kingdom of Ireland. Following independence, the lodge was earmarked as the potential home of the governor-general, but this was highly controversial as it symbolised continued British rule over Ireland, so it was left empty for many years. President Douglas Hyde "temporarily" occupied the building in 1938, as Taoiseach Éamon de Valera intended to demolish it and build a more modest presidential bungalow on the site. Those plans were scrapped during The Emergency and the lodge became the president's permanent residence.
Much like Áras an Uachtaráin, many of the grand estate homes of the former aristocracy were re-purposed for State use in the 20th century. The Deerfield Residence, also in Phoenix Park, is the official residence of the United States ambassador to Ireland, while Glencairn House in south Dublin is used as the British ambassador's residence. Farmleigh House, one of the Guinness family residences, was acquired by the government in 1999 for use as the official Irish state guest house.
Many other prominent judicial and political organs are located within Dublin, including the Four Courts, which is the principal seat of the Supreme Court, the Court of Appeal, the High Court and the Dublin Circuit Court; and the Custom House, which houses the Department of Housing, Local Government and Heritage. Once the centuries-long seat of the British government's administration in Ireland, Dublin Castle is now only used for ceremonial purposes, such as policy launches, hosting of State visits, and the inauguration of the president.
Social issues and ideology.
Dublin is among the most socially liberal places in Ireland, and popular sentiment on issues such as LGBT rights, abortion and divorce has often run ahead of the rest of the island. The liberal side in referendums on these issues has consistently received much stronger support within Dublin, particularly in the south of the county, than in the country as a whole. While over 66% of voters nationally voted in favour of the Eighth Amendment in 1983, 58% of voters in Dún Laoghaire and 55% in Dublin South voted against it. In 2018, over 75.5% of voters in County Dublin voted to repeal the amendment, compared with 66.4% nationally.
In 1987, Dublin Senator David Norris took the Irish government to the European Court of Human Rights (see "Norris v. Ireland") over the criminalisation of homosexual acts. In 1988, the court ruled that the law criminalising same-sex activities was contrary to the European Convention on Human Rights, in particular Article 8, which protects the right to respect for private life. The law was held to infringe on the right of adults to engage in acts of their own choice, and this led directly to its repeal in 1993. Numerous LGBT events and venues are now located within the county. Dublin Pride is an annual pride parade held on the last Saturday of June and is Ireland's largest public LGBT event; in 2018, an estimated 60,000 people attended. During the 2015 referendum on same-sex marriage, 71% of County Dublin voted in favour, compared with 62% nationally.
In general, the south-eastern coastal regions of the county such as Dún Laoghaire and Dublin Bay South are a stronghold for the liberal-conservative Fine Gael party. Since the late-2000s the Green Party has also developed a strong support base in these areas. The democratic socialist Sinn Féin party generally performs well in south-central and west Dublin, in areas like Tallaght and Crumlin. In recent elections Sinn Féin have increasingly taken votes in traditional Labour Party areas, whose support has been on the decline since 2016. As a result of the economic crisis, centre-right Fianna Fáil failed to gain a single seat in Dublin in the 2011 general election. This was a first for the long-time dominant party of Irish politics. The party regained a footing in 7 of the 11 Dublin constituencies in 2020, and were also the largest party in Dublin City, Fingal and South Dublin in the 2019 local elections.
Sport.
GAA.
Dublin is a dual county in Gaelic games, and it competes at a similar level in both hurling/camogie and Gaelic football. The Dublin county board is the governing body for Gaelic games within the county. The county's current GAA crest, adopted in 2004, represents Dublin's four constituent areas. The castle represents Dublin city, the raven represents Fingal, the Viking longboat represents Dún Laoghaire–Rathdown and the book of Saint Tamhlacht in the centre represents South Dublin.
In Gaelic football, the Dublin county team competes annually in Division 1 of the National Football League and the provincial Leinster Senior Football Championship. Dublin is the dominant force of Leinster football, with 62 Leinster Senior Championship wins. Nationally, the county is second only to Kerry for All-Ireland Senior Football Championship titles. The two counties are fierce rivals, and a meeting between them is considered the biggest game in Gaelic football. Dublin has won the All-Ireland on 31 occasions, including a record 6 in a row from 2015 to 2020.
In hurling, the Dublin hurling team currently compete in Division 1B of the National Hurling League and in the Leinster Senior Hurling Championship. Dublin is the second most successful hurling county in Leinster after Kilkenny, albeit a distant second, with 24 Leinster hurling titles. The county has seen less success in the All-Ireland Senior Hurling Championship, ranking joint-fifth alongside Wexford. Dublin has been in 21 All-Ireland hurling finals, winning just 6, the most recent of which was in 1938.
Within the county, Gaelic football and hurling clubs compete in the Dublin Senior Football Championship and the Dublin Senior Hurling Championship, which were both established in 1887. St Vincents, based in Marino, and Faughs, based in Templeogue, are by far the most successful clubs in Dublin in their respective sports. Four Dublin football teams have won the All-Ireland Senior Club Football Championship: St Vincents, Kilmacud Crokes, UCD and Ballyboden St Enda's. Despite their historic dominance in Dublin, Faughs have never won an All-Ireland Senior Club Hurling Championship. Since the early 2010s, Dalkey's Cuala have been the county's main hurling force, and the club won back-to-back All-Irelands in 2017 and 2018.
Soccer.
Association football (soccer) is one of the most popular sports within the county. While Gaelic games are the most watched sport in Dublin, association football is the most widely played, and there are over 200 amateur football clubs in County Dublin. Dalymount Park in Phibsborough is known as the "home of Irish football", as it is both the country's oldest stadium and the former home ground of the national team from 1904 until 1990. The Republic of Ireland national football team is currently based in the 52,000-seat Aviva Stadium, which was built on the site of the old Lansdowne Road stadium in 2010. The Aviva Stadium has hosted the final of the UEFA Europa League twice, in 2011 and 2024. Five League of Ireland football clubs are based within County Dublin: Bohemians F.C., Shamrock Rovers, St Patrick's Athletic, University College Dublin and Shelbourne.
Shamrock Rovers, formerly of Milltown but now based in Tallaght, are the most successful club in the country, with 21 League of Ireland titles. They were also the first Irish side to reach the group stages of a European competition when they qualified for the 2011–12 UEFA Europa League group stage. The Dublin University Football Club, founded in 1854, are technically the world's oldest extant football club. However, the club currently only plays rugby union. Bohemians are Ireland's third oldest club currently playing football, after Belfast's Cliftonville F.C. and Athlone Town A.F.C. The Bohemians–Shamrock Rovers rivalry not only involves Dublin's two biggest clubs, but it is also a Northside-Southside rivalry, making it the most intense derby match in the county.
Other sports.
Rugby union is the county's third most popular sport, after Gaelic games and association football. Leinster Rugby play their competitive home games in the RDS Arena and the Aviva Stadium. Donnybrook Stadium hosts Leinster's friendlies and A games, as well as the Ireland A and women's teams, Leinster Schools and Youths, and the home club games of All-Ireland League sides Old Wesley and Bective Rangers. County Dublin is home to 13 of the senior rugby union clubs in Ireland, including 5 of the 10 sides in the top division 1A.
Other popular sports in the county include: cricket, hockey, golf, tennis, athletics and equestrian activities. Dublin has two ODI cricket grounds in Castle Avenue and Malahide Cricket Club Ground, and the Phoenix Cricket Club, founded in 1830, is the oldest in Ireland. As with many other sporting organisations in the county, the Fitzwilliam Lawn Tennis Club is one of the world's oldest. It hosted the now-discontinued Irish Open from 1879 until 1983. Field hockey, particularly women's field hockey, is becoming increasingly popular within the county. The Ireland women's national field hockey team made it to the 2018 World Cup final, and many of the players on that team were from Dublin clubs, such as UCD, Old Alex, Loreto, Monkstown, Muckross and Railway Union.
The Dublin Horse Show takes place at the RDS, which hosted the Show Jumping World Championships in 1982, and the county has a horse racing track at Leopardstown which hosts the Irish Champion Stakes every September. Dublin houses the national stadium for both boxing (National Stadium) and basketball (National Basketball Arena), and the city hosted the 2003 Special Olympics. Although a small county in size, Dublin contains one third of Leinster's 168 golf courses, and three-time major winner Pádraig Harrington is from Rathfarnham.
Media.
Local radio stations include 98FM, FM104, Dublin City FM, Q102, SPIN 1038, Sunshine 106.8, Raidió Na Life and Radio Nova.
Local newspapers include "The Echo", and the "Liffey Champion".
Most of the area can receive the five main UK television channels as well as the main Irish channels, along with Sky TV and Virgin Media Ireland cable television.